Kublr Release 1.20.0 (2021-01-29)

This release has a known critical issue; use Kublr 1.21.2 or later instead.

Because the Docker image gcr.io/kubernetes-helm/tiller:v2.14.3 was discontinued in the Google image repository in August 2021 (related issue: “Make Tiller Image Available on Docker Hub”), Kublr may fail to complete cluster creation and update.

The cluster hangs in the “Creating” or “Updating” state indefinitely or for a very long time, or goes to the “Error” state; in all cases the Tiller pod is unhealthy because the Tiller image is not available.

All versions of Kublr before 1.21.2 (including this one) and Kublr Agent versions earlier than those included in Kublr 1.21.2 are affected.

The issue and available solutions are described in the troubleshooting guide on the Kublr support portal.

Migration to the latest Kublr Agent and Kublr Control Plane versions, or at least Kublr 1.21.2, is recommended.

Kublr Quick Start

sudo docker run --name kublr -d --restart=unless-stopped -p 9080:9080 kublr/kublr:1.20.0

Follow the full instructions in Quick start for Kublr Demo/Installer.
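
To verify that the Demo/Installer container started successfully, you can watch its logs and then open the web UI on the published port. This is a quick check using standard Docker commands only; the port follows the -p 9080:9080 mapping in the command above.

# Follow the container logs until the Kublr Demo/Installer reports it is up
sudo docker logs -f kublr

# Then open the UI in a browser at http://localhost:9080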

The Kublr Demo/Installer is a lightweight, dockerized, limited-functionality Kublr Platform which can be used to:

  • Test setup and management of a standalone Kubernetes cluster
  • Set up a full-featured Kublr Platform

The Kublr Demo/Installer stores all of the data about the created clusters inside the Docker container. If you delete the Docker container, you will lose all data about the created clusters and Kublr Platforms. However, you will not lose the clusters and platforms themselves.

We recommend using the Kublr Demo/Installer to verify whether a Kubernetes cluster can be created in your environment and to experiment with it. To manage a real cluster and experience all features, you can create a full-featured Kublr Platform in a cloud or on-premises.

Overview

The Kublr 1.20.0 release brings Kubernetes 1.19.7, RedHat Enterprise Linux 8 support, the ELK 7.9.0 stack with SearchGuard plugin v45.0.0, multiple significant Azure deployment improvements including Azure Virtual Machine Scale Sets, zones, and ARM resource extensions support, improved cloud deployment architecture, as well as component version upgrades and numerous improvements and fixes in UI, UX, backend, agent, resource usage, reliability, scalability, and documentation.

Important Changes

  • Kubernetes v1.19 support (v1.19.7 by default)
  • RedHat Enterprise Linux 8 and CentOS 8 support
  • Elasticsearch and Kibana v7.9.0 with SearchGuard plugin
  • Azure deployment architecture improvements (doc)
  • Azure Virtual Machine Scale Set support (doc)
  • Azure zones and zone pinning support
  • Azure ARM resource extensions and overrides support
  • Azure Ubuntu FIPS-certified VM images support
  • AWS NLB-type master load balancer support

Improvements and stability

  • Helm v3.4.0, kublr charts migrated to new stable repos (blog)
  • Azure extra object merging support (doc)
  • Self-hosted Kibana/Grafana/Prometheus/AlertManager moved to Ingress endpoints
  • vSphere custom VM setting support
  • GCP master nodes scale up support
  • Monitoring and alerting improvements
  • Azure: disable SSH access to master nodes by default
  • Azure: TCP port 443 on the load balancer provides access only to the master nodes' API
  • RAW repositories support for Kubernetes artifacts (kubelet and kubectl)
  • AWS ECR Docker repository support for on-premises environments
  • vSphere zone support
  • Kublr Agent installation improvements: installed OS packages are split into two categories, required and optional
  • Logging: Logstash scaling can be configured
  • Faster logging Helm chart deployment

Bug fixes

  • Docker repositories override for docker.io, gcr.io and elastic.co
  • KCP Prometheus: too many config reloads
  • AWS: fixed cluster deletion with NLB for master nodes
  • Logging update failed with immutable sg-job
  • New Kubernetes worker nodes were not added to the load balancer
  • Kublr stops cluster monitoring if a cluster update partially fails
  • UI: Keycloak theme error with two-factor auth enabled
  • Logs from /var/log/kublr/kubelet.log are not collected

AirGap Artifacts list

Additionally, you need to download the BASH scripts from https://repo.kublr.com

You also need to download Helm package archives and Docker images:
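
As an illustration of preparing Docker images for an air-gapped environment, below is a minimal sketch of mirroring one image referenced in these release notes into a private registry. The registry host is a placeholder; the complete list of Helm package archives and Docker images for this release should be taken from the Kublr documentation.

# Mirror a single image into a private registry reachable from the air-gapped environment
# (<your-private-registry> is a placeholder for your registry host)
sudo docker pull kubernetesui/dashboard:v2.0.4
sudo docker tag kubernetesui/dashboard:v2.0.4 <your-private-registry>/kubernetesui/dashboard:v2.0.4
sudo docker push <your-private-registry>/kubernetesui/dashboard:v2.0.4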

Supported Kubernetes versions

  • v1.19
  • v1.18
  • v1.17 (Deprecated in 1.21.0)
  • v1.16 (End of support in 1.21.0)

Component versions

Kubernetes

Component     Version    Kublr Agent    Note
Kubernetes    1.19       1.19.7-26      default v1.19.7
              1.18       1.18.15-5
              1.17       1.17.17-5      Deprecated in 1.21.0
              1.16       1.16.15-5      End of support in 1.21.0

Kublr Control Plane

Component              Version
Kublr Control Plane    1.20.0-100
Kublr Operator         1.20.0-100

Kublr Platform Features

Component                                          Version
Kubernetes
  Dashboard                                        v2.0.4
  Tiller
Kublr System                                       1.20.0-100
  LocalPath Provisioner (helm chart version)       0.0.12-6
Ingress                                            1.20.0-100
  nginx ingress controller (helm chart version)    1.36.2
  cert-manager                                     0.14.2
Centralized Logging                                1.20.0-100
  ElasticSearch                                    7.9.0
  Kibana                                           7.9.0
  SearchGuard                                      45.0.0
  SearchGuard Kibana plugin                        45.0.0
  SearchGuard Admin                                7.9.0-45.0.0
  RabbitMQ                                         3.8.9
  Curator                                          5.8.1
  Logstash                                         7.9.0
  Fluentd                                          2.7.1
Centralized Monitoring                             1.20.0-100
  Prometheus                                       2.13.0
  Kube State Metrics                               2.4.1
  AlertManager                                     0.19.0
  Grafana                                          6.5.1
Kublr KubeDB                                       1.20.0-100
  kubedb (helm chart version)                      v0.14.0-alpha.2

Known issues and limitations

  1. For migration from Kubernetes versions below 1.19, delete the CoreDNS ConfigMap in the kube-system namespace:

    kubectl delete cm -n kube-system coredns
    
  2. Managed cluster migration from a previous major Kublr version (<1.18.0) has limitations related to upgrading Kublr components to Kublr 1.20. Please refer to the Kublr 1.18 migration document for more details.

  3. Beginning November 2, 2020, progressive enforcement of rate limits for anonymous and authenticated Docker Hub usage came into effect. Learn more about the change in the article Understanding Docker Hub Rate Limiting. Kublr clusters use some images hosted on Docker Hub / docker.io (e.g. kubernetesui/dashboard:v2.0.4). As a result, some cluster operations may fail due to Docker Hub rate limiting (a rate limit check example is provided at the end of this section). You can avoid possible issues using one of the following solutions:

    1. If you have a paid Docker Hub account, create a docker.io secret in the Kublr Control Plane and add this Docker registry to the cluster specification using the advanced section of the Kublr cluster creation UI.
    2. Override the docker.io registry with cr.kublr.com; all images needed for cluster installation are mirrored in this repository. Learn more about Docker registry override in the cluster specification reference in the Kublr documentation.
  4. Index patterns in self-hosted Kibana need to be created manually. Please refer to the Kibana documentation for more information.

  5. The non-OSS ELK version cannot be enabled. By default, Kublr configures the ELK stack without X-Pack capabilities enabled. If this capability is necessary for your deployment, postpone the Kublr upgrade until Kublr 1.20.1 is available.

  6. (Critical) Because the Docker image gcr.io/kubernetes-helm/tiller:v2.14.3 was discontinued in the Google image repository in August 2021 (related issue: “Make Tiller Image Available on Docker Hub”), Kublr may fail to complete cluster creation and update.

    The cluster hangs in the “Creating” or “Updating” state indefinitely or for a very long time, or goes to the “Error” state; in all cases the Tiller pod is unhealthy because the Tiller image is not available (see the diagnostic example after this list).

    All versions of Kublr before 1.21.2 (including this one) and Kublr Agent versions earlier than those included in Kublr 1.21.2 are affected.

    The issue and available solutions are described in the troubleshooting guide on the Kublr support portal.

    Migration to the latest Kublr Agent and Kublr Control Plane versions, or at least Kublr 1.21.2, is recommended.
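
To confirm that a cluster stuck in “Creating” or “Updating” is affected by the Tiller image issue (item 6), you can check the Tiller pod for image pull errors. This is a sketch using standard kubectl commands; the pod's namespace and name depend on your deployment, so take them from the first command's output.

# Find the Tiller pod and check its status
kubectl get pods --all-namespaces | grep -i tiller

# Inspect the pod's events for errors such as ErrImagePull or ImagePullBackOff
# (substitute the namespace and pod name from the previous output)
kubectl describe pod -n <namespace> <tiller-pod-name>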
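
For the Docker Hub rate limiting issue (item 3), you can check the current pull limits for a host using the approach from Docker's Understanding Docker Hub Rate Limiting article. This sketch assumes curl and jq are available and checks the anonymous limit tied to the host's IP address.

# Request an anonymous pull token for Docker's rate limit preview image
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)

# Read the ratelimit-limit and ratelimit-remaining response headers
curl -s --head -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit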