This document explains the main aspects of the Kublr general architecture. Kublr is a Kubernetes management platform that accelerates and controls the deployment, scaling, monitoring, and management of your Kubernetes clusters.
The diagram below describes two main deployment types for Kublr:
The Kublr Platform (at the top-left) is the main management portal for your Kubernetes clusters.
It runs on a Kublr Cluster, which is based on a standard Kubernetes cluster with additional components, explained below.
The Kublr Platform provides an integrated set of applications running in the Kublr Cluster. They include:
The Kublr Cluster is the main management object for the Kublr Platform. Kublr configures four architectural layers. After being correctly set up and configured (usually by a higher layer), each layer is self-sufficient, self-healing, and self-reliant. Each layer functions without interruption as long as the underlying infrastructure and layers remain functional.
Layers at play:
The infrastructure layer includes virtual and/or physical machines hosted in a datacenter or in a cloud (e.g. AWS, Azure). Kublr sets up and manages Kubernetes on top of virtually any infrastructure and multiple operating systems, though some types of infrastructure providers allow for better automation than others. When setting up instances in any environment, whether a Kubernetes master or node, Kublr ensures they are fully replaceable should an instance failure occur. In AWS and Azure, Kublr enables self-healing at the infrastructure level via AWS auto-scaling groups and Azure VM auto-restart. On-premise virtual and physical machines can easily be replaced manually, or through the infrastructure automation tools of your choice, in case of failure.
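As an illustration of the auto-scaling group approach (a sketch of the general AWS technique, not Kublr's actual implementation; the function and resource names below are hypothetical), a master instance can be made self-healing by pinning it inside an Auto Scaling group of fixed size one, so AWS replaces it automatically if it fails health checks:

```python
# Sketch: self-healing a single master via an AWS Auto Scaling group.
# With MinSize == MaxSize == 1, a failed instance is terminated and
# replaced automatically, restoring the layer without operator action.

def master_asg_params(cluster_name: str, zone: str, launch_template_id: str) -> dict:
    """Build parameters for boto3's autoscaling create_auto_scaling_group call."""
    return {
        "AutoScalingGroupName": f"{cluster_name}-master-{zone}",
        "LaunchTemplate": {"LaunchTemplateId": launch_template_id},
        "MinSize": 1,
        "MaxSize": 1,
        "DesiredCapacity": 1,
        "AvailabilityZones": [zone],
        "HealthCheckType": "EC2",       # replace on failed instance status checks
        "HealthCheckGracePeriod": 300,  # seconds to let the instance boot first
    }

# In a real deployment these parameters would be passed to boto3, e.g.:
#   boto3.client("autoscaling").create_auto_scaling_group(
#       **master_asg_params("demo", "us-east-1a", "lt-0abc"))
params = master_asg_params("demo", "us-east-1a", "lt-0abc")
print(params["AutoScalingGroupName"])  # → demo-master-us-east-1a
```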
The Kubernetes layer includes all standard Kubernetes components: the etcd cluster, the Kubernetes master components (API server, scheduler, controller manager, etc.), kubelet, kube-proxy, and various Kubernetes add-ons such as DNS, the dashboard, the overlay network provider, the autoscaler, etc. Kublr sets up and connects the Kubernetes components on each instance so that communication between them is secure, reliable, and able to recover from failures as long as, and as soon as, the underlying infrastructure is recovered. Kublr uses unmodified standard Kubernetes components and Kubernetes configuration best practices to ensure a secure, reliable, and standards-conformant Kubernetes setup.
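One reason highly available setups use an odd number of masters (as in the 3-master configurations shown later) is etcd's quorum rule: the cluster stays writable only while a majority of members is reachable. A minimal sketch of that arithmetic:

```python
# etcd quorum arithmetic: a cluster of N members needs a majority
# (N // 2 + 1) of members reachable to keep serving writes.

def quorum(members: int) -> int:
    """Smallest majority of an etcd cluster of the given size."""
    return members // 2 + 1

def tolerated_failures(members: int) -> int:
    """How many members can fail while the cluster keeps quorum."""
    return members - quorum(members)

# A 3-master cluster tolerates 1 failure; 5 masters tolerate 2.
# Note that an even member count buys nothing: 4 members still
# tolerate only 1 failure, same as 3.
for n in (1, 3, 4, 5):
    print(n, quorum(n), tolerated_failures(n))
```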
The Kublr Agent is a single binary running as a service on every instance, both masters and nodes, of the managed Kubernetes cluster. The Kublr Agent is responsible for configuring the Kubernetes components on the machines and connecting them into a Kubernetes cluster as described above. As a result, a new Kubernetes cluster is started on the provisioned infrastructure and connected to the Kublr Control Plane with centralized authentication, monitoring, and logging.
The Kublr agent’s responsibilities include:
In addition, Kublr provides a local logging component, responsible for collecting Kubernetes and pod logs and passing them on to the Kublr Platform.
The diagram below depicts the main components of the Kublr Cluster and Kubernetes:
When creating a cluster with the Kublr Control Plane, Kublr works with the infrastructure provider (e.g. AWS, Azure) to provision the required infrastructure (e.g. VPC, VMs, load balancers) and to start the Kublr Agent on the provisioned virtual or physical machines.
The diagrams below show how Kublr deploys a Kublr Cluster in different environments: Amazon Web Services, Microsoft Azure, or an on-premise installation. A configuration with 3 master nodes and 3 worker nodes is shown.
This diagram shows a typical Amazon Web Services configuration for a Kublr Cluster. It has two IAM roles, one for Master nodes and one for Worker nodes, both with access to the S3 bucket storing cluster secrets. All cluster resources except the Ingress and Masters load balancers are created inside a dedicated VPC. Worker and Master nodes are launched inside auto-scaling groups located in different Availability Zones to ensure high availability. Worker nodes are separated from Master nodes using different security groups and routing tables. Etcd data is stored on EBS volumes created for each Master node.
This diagram shows a typical Microsoft Azure configuration for a Kublr Cluster. A new Resource Group is created for each cluster, with Secrets Blob storage and a Virtual Network containing two Availability Sets, one for Master nodes and one for Worker nodes, to ensure high availability. A public Load Balancer is created to balance load between the Master nodes, along with a private Load Balancer, which is used for communication between Worker nodes and Master nodes. Masters have a Data Disk, which stores etcd data.
This diagram explains how on-premise Kublr clusters are deployed. Kublr needs a machine for each Master node and Worker node, and these machines must have network connectivity to each other. In addition, two Load Balancers need to be provisioned: one for the Masters, which is used by the Worker nodes, and one for the Worker nodes.
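On premises, the masters load balancer typically fronts the Kubernetes API server port (6443 by default) and should only route to masters that are actually reachable. A minimal, stdlib-only reachability probe (a hypothetical sketch, not part of Kublr itself) might look like:

```python
# Sketch: TCP reachability probe for master endpoints behind an
# on-premise load balancer. A real LB (HAProxy, NGINX, hardware)
# would run an equivalent health check itself.

import socket

def is_master_reachable(host: str, port: int = 6443, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to the master endpoint succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def reachable_masters(hosts, port: int = 6443):
    """Filter a master list down to endpoints the LB could route to."""
    return [h for h in hosts if is_master_reachable(h, port)]
```

A probe like this checks only TCP connectivity; production health checks usually go further and query the API server's HTTPS health endpoint.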
The Kublr Centralized Monitoring feature is built on top of Prometheus and Grafana. Each cluster managed by the Kublr Platform is registered as a metrics source in Prometheus. Each cluster provides cloud, hardware, OS, Kubernetes, and application metrics through its Kubernetes API. Kublr manages the list of metrics sources in Prometheus. Grafana is integrated with the Kublr Control Plane via single sign-on. The centralized monitoring component is deployed to the Kublr Platform as a Helm package.
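One standard way to pull metrics from many clusters into a central Prometheus is Prometheus federation; the sketch below builds one federation scrape job per registered cluster. This is an assumed setup for illustration (the endpoints and the `federation_job` helper are hypothetical), not Kublr's exact configuration:

```python
# Sketch: generate one Prometheus federation scrape job per managed
# cluster. The resulting dicts follow the scrape_config schema and
# would normally be serialized into prometheus.yml.

def federation_job(cluster_name: str, prometheus_addr: str) -> dict:
    """Build a Prometheus scrape_config entry federating one cluster."""
    return {
        "job_name": f"federate-{cluster_name}",
        "honor_labels": True,                    # keep the source cluster's labels
        "metrics_path": "/federate",             # Prometheus federation endpoint
        "params": {"match[]": ['{job=~".+"}']},  # pull all federated series
        "static_configs": [
            {"targets": [prometheus_addr],
             "labels": {"cluster": cluster_name}},
        ],
    }

# Hypothetical cluster registry:
scrape_configs = [federation_job(name, addr) for name, addr in [
    ("demo-aws", "demo-aws.example.com:9090"),
    ("demo-azure", "demo-azure.example.com:9090"),
]]
print(scrape_configs[0]["job_name"])  # → federate-demo-aws
```

Adding or removing a cluster then reduces to regenerating this list, which matches the "Kublr manages the list of metrics sources" behavior described above.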
The Kublr Centralized Log Collection feature is built on top of the classic Elasticsearch / Logstash / Kibana stack. In addition, for better resilience, RabbitMQ is used as an MQ provider to accumulate pending log entries between sync-up sessions. At the Kublr Cluster level, a Helm package with RabbitMQ, Fluentd, and HAProxy is deployed. Fluentd collects log entries from all levels: hardware, OS, pods, and Kubernetes components, including kublr-core. RabbitMQ is configured as the primary destination for the collected logs, and HAProxy provides the data channel between the Kublr Platform and the Kublr Cluster using the Kubernetes port-forwarding feature. On the Kublr Platform side, the kublr-central-logging Helm package includes Elasticsearch, Kibana, and Logstash, together with RabbitMQ and the RabbitMQ Shovel plugin. The Shovel plugin transfers messages from all clusters to the centralized RabbitMQ. From RabbitMQ, they are digested by Logstash and stored in Elasticsearch. Kibana, with single sign-on from Kublr, provides a convenient UI for accessing and searching log entries from all clusters. In addition to centralized log collection, local Elasticsearch and Kibana instances may be installed by the user; they operate in parallel with the centralized log collection mechanism.
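The store-and-forward behavior described above can be modeled with a toy, stdlib-only sketch: entries accumulate in a per-cluster queue (standing in for the local RabbitMQ) while the link to the platform is down, and are drained to central storage (standing in for the Shovel → Logstash → Elasticsearch path) when the sync-up session resumes. All names here are illustrative:

```python
# Toy model of RabbitMQ-buffered log shipping between sync-up sessions.

from collections import deque

class ClusterLogBuffer:
    def __init__(self):
        self.queue = deque()          # local RabbitMQ stand-in

    def collect(self, entry: str):
        """Fluentd stand-in: append a collected log entry."""
        self.queue.append(entry)

    def sync(self, central: list):
        """Shovel stand-in: drain buffered entries to the central store, in order."""
        while self.queue:
            central.append(self.queue.popleft())

central_store = []                    # Elasticsearch stand-in
buf = ClusterLogBuffer()
buf.collect("kubelet: node ready")    # link down: entries accumulate locally
buf.collect("pod/web-1: started")
buf.sync(central_store)               # link restored: entries are shipped
print(len(central_store))             # → 2
```

The point of the buffer is that no log entry is lost while the Kublr Platform is unreachable; the trade-off is delayed visibility until the next sync-up.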
The Kublr Backup Controller component is deployed as a Kubernetes application to the Kublr Platform. It is primarily responsible for making snapshots of all volumes attached to the clusters, as specified in the backup schedule. The Backup Controller is also responsible for purging old backup snapshots and for restoring clusters from a provided snapshot.
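A hypothetical sketch of the purge side of such a controller (the retention policy and helper name are illustrative assumptions, not Kublr's documented behavior): given snapshot timestamps and a retention window, select which snapshots to delete.

```python
# Sketch: age-based snapshot retention. Snapshots older than the
# retention window are selected for purging.

from datetime import datetime, timedelta

def snapshots_to_purge(snapshots, now, retention_days: int):
    """Return (snapshot_id, created_at) pairs older than the retention window."""
    cutoff = now - timedelta(days=retention_days)
    return [(sid, ts) for sid, ts in snapshots if ts < cutoff]

now = datetime(2020, 6, 10)
snaps = [
    ("snap-a", datetime(2020, 6, 9)),   # 1 day old   -> keep
    ("snap-b", datetime(2020, 5, 1)),   # 40 days old -> purge
]
old = snapshots_to_purge(snaps, now, retention_days=7)
print([sid for sid, _ in old])          # → ['snap-b']
```

Real controllers often combine age with count-based rules (e.g. always keep the latest N snapshots) so a long outage in the schedule cannot purge every remaining backup.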
Questions? Suggestions? Need help? Contact us.