Managing Cluster Features

Starting with version 1.18.0, Kublr uses a new approach to the installation and further management of cluster features.

Kublr cluster features are software packages specially prepared for use with the Kublr Platform; they are installed into a cluster after the cluster's Kubernetes components become functional. Some of the features are installed by Kublr by default, others may be optionally enabled in the cluster creation and management UI, but in all cases any feature can be enabled, disabled, or configured via the Kublr cluster specification. Currently, Kublr features include:

  • Kublr system package including global Kubernetes objects and components required by Kublr integration, such as storage class(es), web console pod, etc.,
  • Monitoring, local and centralized,
  • Log collection, local and centralized,
  • Ingress controller,
  • Kublr Platform itself.

The main difference in the feature management approach implemented in Kublr 1.18.0 is that the feature installation and configuration process is separated from the Kubernetes infrastructure and cluster deployment. This prevents potential conflicts that may occur during cluster deployment and makes the whole cluster deployment process more modular and robust.

This separation is based on the Kubernetes extension mechanism of custom resource definitions (CRDs; see the detailed description in the Kubernetes documentation article Extend the Kubernetes API with CustomResourceDefinitions). Kublr deploys its CRDs, CRs, and the Kublr operator into the Kubernetes cluster; the Kublr operator then acts independently, synchronizing the Kublr features' actual state with the required state described in the CRs.

Kubernetes - Custom Resource Definitions
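As an illustration of the desired/actual state separation described above, here is a minimal sketch of what a feature CR might look like. The `apiVersion`, `kind`, and field names are hypothetical placeholders, not the actual Kublr CRD schema: the `spec` section carries the desired state written by the Kublr Platform, while `status` is filled in by the kublr-operator as it reconciles.

```python
# Hypothetical (illustrative only) custom resource for a Kublr feature.
feature_cr = {
    "apiVersion": "kublr.example/v1",   # placeholder API group/version
    "kind": "KublrFeature",             # placeholder kind
    "metadata": {"name": "monitoring", "namespace": "kublr"},
    "spec": {                           # desired state, set by the platform
        "enabled": True,
        "values": {"retentionDays": 7},
    },
    "status": {                         # actual state, reported by the operator
        "phase": "Deployed",
        "healthy": True,
    },
}

def desired_vs_actual(cr):
    """Return True if the operator still has work to do for this CR."""
    wants_enabled = cr["spec"]["enabled"]
    is_deployed = cr.get("status", {}).get("phase") == "Deployed"
    return wants_enabled and not is_deployed

print(desired_vs_actual(feature_cr))  # False: spec and status already agree
```

The operator's job reduces to driving `status` toward `spec` for every such object, which is exactly the controller pattern Kubernetes CRDs are designed for.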

The following sections describe this process in more detail.

Feature management before Kublr 1.18

The basic flow for installation of a Kubernetes cluster was the following:

  1. A user initiates creating a cluster (clicks Add Cluster or Add Platform) via Kublr Platform or Kublr-in-a-box UI, specifying parameters and selecting features at this step.

  2. Kublr connects to the selected infrastructure target (for example, a cloud account) and creates infrastructure (virtual machines, networks, load balancers etc) there.

  3. Kublr agent is deployed on the machines (either via corresponding infrastructure automation tools, or by Kublr directly via SSH).

  4. On the machines within the created infrastructure, the agent makes sure that the Kubernetes components are installed, initialized, configured, started, and the Kubernetes cluster as a whole is started. This includes certain built-in components and addons start up, such as an overlay network plugin, Kubernetes dashboard, metrics server, etc.

  5. Kublr Platform watches the cluster startup process, and as soon as the cluster Kubernetes API becomes available and Kubernetes nodes are registered and cluster is healthy, the Cluster Controller component of the Kublr platform installs the features configured for the cluster.

    Important to note that uninterrupted connectivity between the Kublr Platform and the managed clusters is required for this method; so if the Kublr Platform is running in a location separate from a managed cluster, connectivity interruptions could cause issues for clusters being created.

  6. Kublr Platform queries (remotely) the installed features' endpoints (URLs) to verify the feature statuses and healthiness.

    Note also that with this approach, the cluster update (for example, adding a new node, or changing feature configuration) is impossible until all the features are installed successfully.

Feature List in Kublr
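The pre-1.18 flow above can be sketched as follows. The function names are hypothetical; the point is that the platform's Cluster Controller drives every install and health check remotely, so the whole sequence depends on uninterrupted connectivity to the managed cluster.

```python
def install_features_remotely(features, install, check_health):
    """Pre-1.18 style: the Cluster Controller installs each feature over
    the network and then polls its endpoint. Any connectivity loss mid-way
    interrupts the whole sequence, and cluster updates are blocked until
    every feature installs successfully."""
    statuses = {}
    for feature in features:
        install(feature)                 # remote call into the managed cluster
        healthy = check_health(feature)  # remote endpoint (URL) check
        statuses[feature] = "healthy" if healthy else "unhealthy"
    return statuses

installed = []
statuses = install_features_remotely(
    ["monitoring", "logging"],
    install=installed.append,
    check_health=lambda f: f == "monitoring",  # simulate one failing feature
)
print(statuses)
```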

Feature management in Kublr 1.18

Some elements of the flow remain the same, and those that have changed are marked in blue in the following process description:

  1. A user initiates creating a cluster (clicks Add Cluster or Add Platform) via Kublr Platform or Kublr-in-a-box UI, specifying parameters and selecting features at this step.
  2. Kublr connects to the selected infrastructure target (for example, a cloud account) and creates infrastructure (virtual machines, networks, load balancers etc) there.
  3. Kublr agent is deployed on the machines (either via corresponding infrastructure automation tools, or by Kublr directly via SSH).
  4. On the machines within the created infrastructure, the agent makes sure that the Kubernetes components are installed, initialized, configured, started, and the Kubernetes cluster as a whole is started. This includes certain built-in components and addons start up, such as an overlay network plugin, Kubernetes dashboard, metrics server, etc.
  5. Kublr Platform watches the cluster startup process, and as soon as the cluster Kubernetes API becomes available, Kubernetes nodes are registered, and the cluster is healthy, the Cluster Controller component of the Kublr Platform deploys the “kublr-operator” component (installed via Helm) into the created Kubernetes cluster.
  6. Kublr deploys the Kublr Operator’s Custom Resource Definitions (CRDs) into the cluster.
  7. Based on the cluster specification, the Kublr Platform creates custom resource objects (custom resources, CRs) for the Kublr features.
  8. The “kublr-operator” running inside the cluster initiates the feature package installation and/or configuration processes for each configured feature.
  9. The “kublr-operator”, via its controllers, monitors the statuses of the features; a feature is considered deployed and healthy as soon as its pods are running and respond to requests.
  10. The “kublr-operator” checks the installed features’ endpoints (URLs) to determine the feature statuses and reports them via the CRs.
  11. The Kublr Platform reads the feature statuses from the CRs via the Kubernetes API and reports (replicates) them in the corresponding cluster spec object in the status.detailedFeatureState section.

Kublr-CRD-CRO
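Steps 10 and 11 above amount to copying per-feature status out of the CRs into the cluster spec. A minimal sketch (field names are hypothetical; the real Kublr CR schema may differ) of how the platform could build the status.detailedFeatureState section:

```python
def replicate_feature_statuses(feature_crs):
    """Collect status fields from feature CRs (as read via the Kubernetes
    API) into a single detailedFeatureState map for the cluster spec."""
    detailed = {}
    for cr in feature_crs:
        name = cr["metadata"]["name"]
        status = cr.get("status", {})
        detailed[name] = {
            "phase": status.get("phase", "Unknown"),
            "healthy": status.get("healthy", False),
        }
    return {"status": {"detailedFeatureState": detailed}}

crs = [
    {"metadata": {"name": "monitoring"},
     "status": {"phase": "Deployed", "healthy": True}},
    {"metadata": {"name": "logging"},
     "status": {"phase": "Deploying", "healthy": False}},
]
print(replicate_feature_statuses(crs))
```

Because the statuses live in the CRs inside the cluster, the platform only needs intermittent read access to replicate them, rather than driving the installation itself.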

Updating feature configuration

The “kublr-operator” not only deploys and monitors the features in response to the creation of CR objects, but also monitors the CRs for changes and updates the features accordingly.

Changes to the CRs are normally initiated by the Kublr Platform when a user changes the cluster configuration, but the actual feature re-configuration process is handled by the “kublr-operator” from inside the cluster, independently from the Kublr Platform. Reconfiguration results are reported via the CRs’ status field as usual.
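The update behavior described above is the classic operator reconcile loop: compare the desired configuration from the CR spec against what is actually installed, and apply only the differences. A minimal sketch (not the actual kublr-operator code):

```python
def reconcile(desired, actual, apply_change):
    """One reconcile pass: `desired` maps feature name -> configuration
    (from the CR specs), `actual` is the installed state, and
    `apply_change(feature, config)` performs the install/update/remove."""
    changes = []
    for feature, config in desired.items():
        if actual.get(feature) != config:
            apply_change(feature, config)   # install or reconfigure
            actual[feature] = config
            changes.append(feature)
    for feature in list(actual):
        if feature not in desired:
            apply_change(feature, None)     # feature removed from the spec
            del actual[feature]
            changes.append(feature)
    return changes

applied = []
reconcile({"monitoring": {"replicas": 2}}, {}, lambda f, c: applied.append(f))
print(applied)
```

Running such a pass whenever a CR changes (and periodically) makes reconfiguration idempotent: an unchanged spec produces no work, and a changed spec converges the cluster to the new state.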

This applies to the Kublr Platform as well: from this perspective, the Kublr Platform is nothing more than just another Kublr feature running in a Kublr managed cluster.

Kubernetes - Feature Update Check

Platform, cluster, and feature upgrades

After upgrading the Kublr Platform from a previous version, the administrator will see a message for all preexisting clusters registered in the platform, suggesting an upgrade of the features running in those clusters to the new versions and the new “kublr-operator”-based management process.

After Platform Upgrade - Feature Upgrade Dialog

Users may choose to upgrade the features on their own schedule.

Additional facts

  • If an error occurs during a feature deployment, the “kublr-operator” retries the deployment after 10 seconds, with the retry interval gradually increasing 2x from 10 seconds up to 2 minutes (exponential back-off logic).
  • There are dependencies between features. Dependent feature deployment cannot be started until its dependencies deployment is finished successfully. For example, centralized logging cannot be installed before the control plane is “deployed” and “ready”. The “kublr-operator” tracks feature dependencies.
  • While all CRDs are global (non-namespaced) objects, Kublr features’ CRs are namespaced.
  • All features, both in a cluster and in a platform, are installed into the “kublr” namespace, except for the “Ingress” and “KublrSystem” features, which are deployed in the “kube-system” namespace. The namespaces are fixed and cannot be changed; this may change in future Kublr versions.
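The retry schedule described in the first fact above (start at 10 seconds, double each time, cap at 2 minutes) can be sketched as:

```python
def retry_intervals(first=10, cap=120, attempts=8):
    """Exponential back-off: start at `first` seconds, double after each
    failed attempt, never exceeding `cap` seconds (2 minutes)."""
    interval, schedule = first, []
    for _ in range(attempts):
        schedule.append(interval)
        interval = min(interval * 2, cap)
    return schedule

print(retry_intervals())  # [10, 20, 40, 80, 120, 120, 120, 120]
```

Capping the interval keeps a persistently failing feature from being retried ever more rarely, while the exponential ramp-up avoids hammering the cluster right after a failure.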