Kublr General Architecture Overview


This document explains the main aspects of Kublr's general architecture. Kublr is a Kubernetes management platform that accelerates and controls the deployment, scaling, monitoring, and management of your Kubernetes clusters.

The diagram below shows the two main deployment types for Kublr:

  1. Kublr Platform (the centralized management tool for Kublr Clusters, itself running on a Kublr Cluster)
  2. Kublr Cluster

Kublr Control Plane

The Kublr Platform (at the top-left) is the main management portal for your Kubernetes clusters.

It runs on a Kublr Cluster, which is a standard Kubernetes cluster with additional components, explained below.

The Kublr Platform provides an integrated set of applications running in the Kublr Cluster. They include:

  1. Control Plane / UI - the management interface that allows you to create, view, edit, and delete clusters, and provides access to other user interfaces such as Backup, Keycloak, Grafana, and Kibana.
  2. Centralized Monitoring - based on Prometheus and Grafana, collecting cloud, hardware, OS, Kubernetes, and application metrics from all clusters deployed with Kublr.
  3. Centralized Logging - the standard ELK (Elasticsearch, Logstash, Kibana) stack for collecting logs from all your clusters. Logs from all levels are collected: hardware, OS, Kubernetes, and containers. It also includes audit log collection components that ensure traceability and auditability of all user actions.
  4. IAM: JBoss Keycloak - an open-source identity and access management solution providing user management, LDAP/AD integration, 2FA, and other authentication features for your cluster users. This identity broker also handles integration with external user and identity providers (SAML, OIDC, LDAP, AD) and authenticates users in Kublr and in Kublr-managed Kubernetes clusters.
  5. Backup Controller - responsible for backing up your clusters, purging old backups, and restoring clusters from backup snapshots. Cluster backup snapshots include both Kubernetes metadata and application data.
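The cluster lifecycle operations the Control Plane / UI exposes (create, view, edit, delete) can be pictured as a simple registry. This is a minimal illustrative sketch, not Kublr's actual API; the class and method names are assumptions made for the example.

```python
# Illustrative sketch of Control Plane cluster CRUD operations.
# ClusterRegistry and its methods are hypothetical, not Kublr's real API.
from dataclasses import dataclass, field

@dataclass
class ClusterRegistry:
    clusters: dict = field(default_factory=dict)

    def create(self, name, provider, masters=3, workers=3):
        if name in self.clusters:
            raise ValueError(f"cluster {name!r} already exists")
        self.clusters[name] = {"provider": provider, "masters": masters, "workers": workers}
        return self.clusters[name]

    def view(self, name):
        return self.clusters[name]

    def edit(self, name, **changes):
        self.clusters[name].update(changes)
        return self.clusters[name]

    def delete(self, name):
        return self.clusters.pop(name)

registry = ClusterRegistry()
registry.create("demo", provider="aws")
registry.edit("demo", workers=5)   # scale the hypothetical cluster
```

In the real product these operations drive infrastructure provisioning and agent deployment rather than an in-memory dictionary.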

The Kublr Cluster is the main management object of the Kublr Platform. Kublr configures four architectural layers. After being correctly set up and configured (usually by a higher layer), each layer is self-sufficient, self-healing, and self-reliant. Each layer functions without interruption as long as the underlying infrastructure and layers are functional.

Kublr Control Plane

Layers at play:

The infrastructure layer includes virtual and/or physical machines hosted in a datacenter or in a cloud (e.g. AWS, Azure). Kublr sets up and manages Kubernetes on top of virtually any infrastructure and multiple OSs, though some types of infrastructure providers allow for better automation than others. When setting up instances in any environment, whether Kubernetes masters or nodes, Kublr ensures they are fully replaceable should an instance failure occur. In AWS and Azure, Kublr enables self-healing at the infrastructure level via AWS auto-scaling groups and Azure VM auto-restart. On premises, virtual and physical machines can be replaced manually or through infrastructure automation tools of choice in case of failure.
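The self-healing idea behind auto-scaling groups can be sketched as a reconciliation loop: failed instances are dropped and fresh replacements are launched until the group is back at its desired capacity. This is a simplified illustration of the pattern, not AWS's or Kublr's actual implementation.

```python
# Simplified reconciliation loop in the spirit of an auto-scaling group:
# unhealthy instances are removed and replaced to restore desired capacity.
def reconcile(instances, desired_count, next_id):
    healthy = [i for i in instances if i["healthy"]]
    replacements = []
    while len(healthy) + len(replacements) < desired_count:
        # launch a fresh, fully replaceable instance (hypothetical id scheme)
        replacements.append({"id": f"i-{next_id}", "healthy": True})
        next_id += 1
    return healthy + replacements

group = [
    {"id": "i-1", "healthy": True},
    {"id": "i-2", "healthy": False},   # this instance has failed
    {"id": "i-3", "healthy": True},
]
group = reconcile(group, desired_count=3, next_id=4)
# the failed instance is gone and a replacement has joined the group
```

Because Kublr keeps masters and nodes fully replaceable, such replacement is safe: the new instance configures itself and rejoins the cluster.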

The Kubernetes layer includes all standard Kubernetes components - the etcd cluster, Kubernetes master components (API server, scheduler, controller manager, etc.), kubelet, kube-proxy, and various Kubernetes add-ons such as DNS, dashboard, overlay network provider, autoscaler, etc. Kublr sets up and connects Kubernetes components on each instance so that communication between them is secure, reliable, and able to recover from failures as soon as the underlying infrastructure is recovered. Kublr uses unmodified standard Kubernetes components and Kubernetes configuration best practices, ensuring a secure, reliable, and standards-conformant Kubernetes setup.

The Kublr Agent is a single binary running as a service on every instance of a Kublr-managed Kubernetes cluster, both masters and nodes. The Kublr Agent is responsible for configuring Kubernetes components on the machines and connecting them into a Kubernetes cluster as described above. As a result, a new Kubernetes cluster is started on the provisioned infrastructure and connected to the Kublr Control Plane with centralized authentication, monitoring, and logging.

The Kublr agent’s responsibilities include:

  1. Setup and configuration of auxiliary software packages on instances and of infrastructure resources for the instances, such as etcd EBS volume attachment and initialization in an AWS environment;
  2. Initialization, distribution (through secret store, such as private S3 bucket or Azure storage), and rotation of shared configuration and security data, such as various certificates, keys, tokens etc. required for a secure Kubernetes setup;
  3. Initial setup and configuration of Kubernetes components including Docker, etcd, Kubernetes master components, kubelet, kube-proxy, essential add-ons, and overlay network provider components.
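The agent responsibilities above can be sketched as a startup sequence: prepare instance resources, pull shared secrets from the secret store, then configure the Kubernetes components appropriate for the instance's role. The function and step names below are illustrative assumptions, not the actual Kublr Agent internals.

```python
# Hedged sketch of an agent-like startup sequence for a master or node.
# Step names and the secret-store interface are hypothetical.
def run_agent(role, secret_store):
    steps = []
    # 1. Instance/infrastructure preparation (e.g. etcd EBS volume on AWS masters)
    steps.append("attach-and-init-etcd-volume" if role == "master" else "prepare-node-volumes")
    # 2. Pull shared certificates/keys/tokens from the secret store
    #    (in Kublr: a private S3 bucket or Azure storage)
    secrets = secret_store.get("cluster-secrets")
    steps.append(f"load-secrets:{len(secrets)}")
    # 3. Configure Kubernetes components for this role
    if role == "master":
        components = ["docker", "etcd", "kube-apiserver", "kube-scheduler",
                      "kube-controller-manager"]
    else:
        components = ["docker"]
    components += ["kubelet", "kube-proxy"]
    for c in components:
        steps.append(f"configure:{c}")
    return steps

# Example run against a stand-in secret store
steps = run_agent("master", {"cluster-secrets": ["ca.crt", "sa.key", "token"]})
```

A node-role run would skip the etcd/master components and configure only Docker, kubelet, and kube-proxy.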

In addition, Kublr provides a local logging component responsible for collecting Kubernetes and pod logs and forwarding them to the Kublr Platform.

The diagram below depicts the main components of the Kublr Cluster and Kubernetes:

Kublr Control Plane

Deployment Details

When creating a cluster with the Kublr Control Plane, Kublr works with the infrastructure provider (e.g. AWS or Azure) to provision the required infrastructure (VPC, VMs, load balancers, etc.) and to start the Kublr Agent on the provisioned virtual or physical machines.
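The provision-then-start-agent flow can be outlined in a few lines. This is a sketch under stated assumptions: the provider interface and resource names are invented for the example and stand in for cloud-specific APIs.

```python
# Sketch of the create-cluster flow: provision infrastructure with the
# provider, then start the Kublr Agent on each machine.
# FakeProvider and its resource names are hypothetical stand-ins.
def create_cluster(provider, masters=3, workers=3):
    resources = provider.provision(masters + workers)  # network, VMs, load balancers
    agents = [f"kublr-agent@{vm}" for vm in resources["vms"]]
    return {"resources": resources, "agents": agents}

class FakeProvider:
    def provision(self, vm_count):
        return {
            "network": "vpc-demo",
            "vms": [f"vm-{i}" for i in range(vm_count)],
            "load_balancers": ["masters-lb", "ingress-lb"],
        }

cluster = create_cluster(FakeProvider())  # default 3 masters + 3 workers
```

Once the agents start, they take over configuring Kubernetes on their machines as described in the Kublr Agent section.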

The diagrams below show how Kublr deploys a Kublr Cluster in different environments: Amazon Web Services, Microsoft Azure, and on-premise installations. A configuration with 3 master nodes and 3 worker nodes is shown.

Amazon Web Services Deployment Scheme

This diagram shows a typical Amazon Web Services configuration for a Kublr Cluster. It uses two IAM roles, one for master nodes and one for worker nodes, each with access to the S3 bucket storing cluster secrets. All cluster resources except the ingress and masters load balancers are created inside a dedicated VPC. Worker and master nodes are launched in auto-scaling groups spanning different Availability Zones to ensure high availability. Worker nodes are separated from master nodes using different security groups and routing tables. Etcd data is stored on EBS volumes created for each master node.

Amazon Web Service Deployment

Microsoft Azure Deployment Scheme

This diagram shows a typical Microsoft Azure cloud configuration for a Kublr Cluster. A new resource group is created for each cluster, containing secrets blob storage and a virtual network with two availability sets: one for master nodes and one for worker nodes, to ensure high availability. A public load balancer is created to balance load between master nodes, along with a private load balancer used for communication between worker nodes and master nodes. Each master has a data disk that stores etcd data.

Microsoft Azure Service Deployment

On-Premise Deployment Scheme

This diagram explains how on-premise Kublr clusters are deployed. Kublr needs a machine for each master node and worker node, and these machines must have network connectivity to each other. In addition, two load balancers need to be provisioned: one for the masters, which is used by the worker nodes, and one for the worker nodes.

Kublr On-Premise Deployment Diagram

Centralized Monitoring

The Kublr Centralized Monitoring feature is built on top of Prometheus and Grafana. Each cluster managed by the Kublr Platform is registered as a metrics source in Prometheus. Cloud, hardware, OS, Kubernetes, and application metrics are collected from each cluster through the Kubernetes API. Kublr manages the list of metrics sources in Prometheus. Grafana is integrated with the Kublr Control Plane through single sign-on. The centralized monitoring component is deployed to the Kublr Platform as a Helm package.

Centralized Monitoring
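Managing the list of metrics sources amounts to keeping one Prometheus scrape job per registered cluster. The sketch below generates such a configuration as a Python structure; the job naming and target endpoints are assumptions for illustration, not Kublr's actual Prometheus configuration.

```python
# Illustrative generation of Prometheus scrape jobs, one per registered
# cluster. Job names and endpoints are hypothetical examples.
def scrape_config(clusters):
    return {
        "scrape_configs": [
            {
                "job_name": f"kublr-cluster-{name}",
                "static_configs": [{"targets": [endpoint]}],
            }
            for name, endpoint in sorted(clusters.items())
        ]
    }

cfg = scrape_config({
    "prod": "prod.example.com:9090",   # example endpoints, not real clusters
    "dev": "dev.example.com:9090",
})
```

When a cluster is created or deleted in the Control Plane, the corresponding entry is added to or removed from this list and Prometheus picks up the change.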

Centralized Log Collection

The Kublr Centralized Log Collection feature is built on top of the classic Elasticsearch / Logstash / Kibana stack. In addition, for better resilience, RabbitMQ is used as a message broker to accumulate outstanding log entries between sync-up sessions. At the Kublr Cluster level, a Helm package with RabbitMQ, Fluentd, and HAProxy is deployed. Fluentd collects log entries from all levels: hardware, OS, pods, and Kubernetes components, including Kublr core. RabbitMQ is configured as the primary destination for the collected logs, and HAProxy provides the data channel between the Kublr Platform and the Kublr Cluster using the Kubernetes port-forwarding feature.

On the Kublr Platform side, the kublr-central-logging Helm package includes Elasticsearch, Kibana, and Logstash along with RabbitMQ and the RabbitMQ Shovel plugin. The Shovel plugin transfers messages from all clusters to the centralized RabbitMQ; from there they are digested by Logstash and stored in Elasticsearch. Kibana, with single sign-on from Kublr, provides a convenient UI for accessing and searching log entries from all clusters. In addition to centralized log collection, local Elasticsearch and Kibana instances may be installed by the user; they act in parallel with the centralized log collection mechanism.

Centralized Log Collection
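The buffering role RabbitMQ plays in this pipeline can be shown with a small simulation: entries accumulate in the cluster-side queue while the link to the platform is down, then drain once it is restored, so nothing is lost between sync-up sessions. This is a toy model of the pattern, not the actual Fluentd/RabbitMQ/Logstash deployment.

```python
# Toy simulation of the store-and-forward log pipeline: cluster-side queue
# buffers entries while the platform link is down, then drains on reconnect.
from collections import deque

class LogPipeline:
    def __init__(self):
        self.cluster_queue = deque()   # stands in for RabbitMQ on the cluster
        self.central_store = []        # stands in for Elasticsearch on the platform
        self.link_up = True

    def collect(self, entry):
        # Fluentd delivers an entry to the cluster-side broker
        self.cluster_queue.append(entry)
        self.shovel()

    def shovel(self):
        # Shovel plugin -> central RabbitMQ -> Logstash -> Elasticsearch,
        # collapsed into one step for the simulation
        while self.link_up and self.cluster_queue:
            self.central_store.append(self.cluster_queue.popleft())

pipe = LogPipeline()
pipe.collect("boot ok")
pipe.link_up = False
pipe.collect("disk pressure")   # buffered locally while the link is down
pipe.link_up = True
pipe.shovel()                   # backlog drains after reconnect
```

The ordering is preserved end to end because the broker is drained first-in, first-out.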

Kublr Backup Controller

The Kublr Backup Controller component is deployed as a Kubernetes application to the Kublr Platform. It is primarily responsible for taking volume snapshots of all volumes attached to the clusters, as specified in the backup schedule. The Backup Controller is also responsible for purging old backup snapshots and for restoring clusters from a provided snapshot.
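The purge step can be illustrated with a simple retention rule: keep the N most recent snapshots per cluster and select the rest for deletion. The keep-N policy shape is an assumption for the example, not Kublr's actual retention configuration.

```python
# Illustrative retention logic for a purge step: keep the N newest snapshots
# per cluster, mark the rest for deletion. The keep-N policy is hypothetical.
def snapshots_to_purge(snapshots, keep=3):
    # snapshots: list of (cluster, timestamp) tuples
    by_cluster = {}
    for cluster, ts in snapshots:
        by_cluster.setdefault(cluster, []).append(ts)
    purge = []
    for cluster, stamps in by_cluster.items():
        # everything beyond the `keep` newest timestamps gets purged
        for ts in sorted(stamps, reverse=True)[keep:]:
            purge.append((cluster, ts))
    return sorted(purge)

old = snapshots_to_purge([("demo", t) for t in [1, 2, 3, 4, 5]], keep=3)
# only the two oldest snapshots (timestamps 1 and 2) are selected for purging
```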

Questions? Suggestions? Need help? Contact us.