Kublr Release 1.19.0 (2020-09-04)

This release has a known critical issue; use Kublr 1.21.2 or later instead.

Because the Docker image gcr.io/kubernetes-helm/tiller:v2.14.3 was discontinued in the Google image repository in August 2021 (related issue: “Make Tiller Image Available on Docker Hub”), Kublr may fail to complete cluster creation and update.

The cluster hangs in the “Creating” or “Updating” state indefinitely or for a very long time, or goes to the “Error” state; in all cases the Tiller pod is unhealthy because the Tiller image is not available.

All versions of Kublr before 1.21.2 (including this one), and Kublr Agent versions earlier than the ones included in Kublr 1.21.2, are affected.

The issue and available solutions are described in the troubleshooting guide on the Kublr support portal.

Migration to the latest Kublr Agent and Kublr Control Plane versions, or at least to Kublr 1.21.2, is recommended.

Kublr Quick Start

sudo docker run --name kublr -d --restart=unless-stopped -p 9080:9080 kublr/kublr:1.19.0

Follow the full instructions in Quick start for Kublr Demo/Installer.
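
Once the container starts, the Demo/Installer UI is served on the mapped port 9080. A minimal way to verify the container is up, assuming the container name from the command above:

# Follow the container logs until the Kublr services report they have started
sudo docker logs -f kublr

# Check that the UI endpoint responds on the mapped port
curl -I http://localhost:9080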

The Kublr Demo/Installer is a lightweight, dockerized, limited-functionality Kublr Platform which can be used to:

  • Test setup and management of a standalone Kubernetes cluster
  • Set up a full-featured Kublr Platform

The Kublr Demo/Installer stores all data about the created clusters inside the Docker container. If you delete the Docker container, you will lose all data about the created clusters and Kublr Platforms; however, you will not lose the clusters and the platforms themselves.

We recommend using the Kublr Demo/Installer to verify whether a Kubernetes cluster can be created in your environment and to experiment with it. To manage a real cluster and experience all features, you can create a full-featured Kublr Platform in a cloud or on-premises.

Overview

The Kublr 1.19.0 release brings a new user interface, full support for external clusters, restricted PSP mode support for the Kublr Control Plane components, Search Guard fixes, as well as component version upgrades and numerous improvements and fixes in UI, UX, backend, agent, resource usage, reliability, scalability, and documentation.

Important Changes

  • Kubernetes v1.18 support (v1.18.6 by default)
  • External clusters support
  • New User Interface
  • Support for restricted PSP mode in the Kublr Control Plane cluster (see the note after this list)
  • Search Guard initialization fixes
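
Regarding the restricted PSP mode item above: the command below is not Kublr-specific, just a generic way to inspect which PodSecurityPolicies exist in a cluster (available in Kubernetes 1.18 via the policy/v1beta1 API); the policy names it returns are cluster-specific.

    # List the PodSecurityPolicies currently defined in the cluster
    kubectl get podsecuritypolicies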

Improvements and stability

  • Kubernetes patch level is customizable via the UI; Kublr agents support all Kubernetes patch versions within the same minor version
  • Support for advanced AWS ASG features, including launch templates, mixed instance types, spot instances, etc.
  • AWS: NAT gateways enabled in multiple AZs for private subnets in different AZs
  • Master-only clusters fully supported in UI
  • AirGap installation: Go Binaries Repo URL added to configuration page
  • Master group scaling process
  • Domain configuration added to Ingress section for KCP installation
  • K8s Dashboard uses port-forward proxying instead of k8s API proxying (see the sketch after this list)
  • Kublr Control Plane settings page improvements
  • Custom 404 backend added for Ingress controller
  • Docker registry secrets improvements
  • Private docker registry improvements
  • KCP MongoDB and PostgreSQL replication support
  • Cluster spec validation and reporting improvements
  • The error format in the Kublr API follows Kubernetes conventions
  • UI: for vSphere clusters, additional information about the template disk size and boot disk size override is shown in the cluster create and edit screens
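
For context on the port-forward proxying item above: a port-forward proxy tunnels dashboard traffic through the API server's port-forward mechanism rather than the generic API proxy. The snippet below is only a conceptual illustration using plain kubectl; the namespace and service name are assumptions and may differ in a Kublr-managed cluster.

    # Forward a local port to the dashboard service (names are illustrative)
    kubectl -n kube-system port-forward svc/kubernetes-dashboard 8443:443

    # The dashboard is then reachable locally at https://localhost:8443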

Bug fixes

  • Impossible to update a vSphere cluster
  • K8s Dashboard healthcheck fix
  • Tolerations for fluentd, nodelocaldns and ingress controller fixed to run on all tainted nodes
  • SkipTLSVerify option default for Kublr operator follows platform feature controller settings
  • Spotinst elastigroups: AWS ELB cannot be created for services with LoadBalancer type
  • Manual OnPrem: Unable to add or remove a node when using external Identity Providers
  • Kibana logout issue
  • CertUpdater used the external k8s LB instead of the internal one
  • KubeDB operator fails to run on k8s 1.18
  • Cluster update problems when disabling nodelocaldns
  • Kublr token authentication with generic k8s dashboard
  • Kublr seeder cannot update agent when /var/lib is mounted with noexec option
  • Ingress feature installation issue
  • Alertmanager PVC is not configurable in the monitoring helm package
  • Tag subnets on AWS for public and internal ELBs
  • Kubelet does not start on GPU nodes

Technical preview

RHEL/CentOS 8 supported only in agent versions: 1.17.9, 1.18.6

  • for bare metal installations, make sure that SELinux is not in enforcing mode on the nodes (see the verification commands after this list):
    sudo setenforce permissive
    
  • on cloud installations, disabling SELinux is not necessary
  • CentOS/RHEL 8.0 is not supported; 8.1 or newer is required ( RHEL Requirements )
  • do not use components (either pods or node processes) that use iptables in legacy mode; this disrupts Kubernetes auto-detection and might render the cluster inoperable ( issue )
  • By default Kublr installs two components that depend on iptables-legacy: NodeLocalDns and the Flannel CNI provider (as part of cni-canal). For CentOS/RHEL 8 support these components must be disabled or not used.
  • Therefore, only the calico and weave CNI providers are supported (on Azure only weave is supported)
  • To disable NodeLocalDns and select CNI provider, add the following lines to the custom cluster specification:
    spec:
    ...
      network:
        ...
        provider: cni-calico # use 'cni-weave' for Azure cluster
        enableLocalDns: false
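
The SELinux and iptables prerequisites above can be checked directly on a node. A minimal verification sketch, assuming standard CentOS/RHEL 8 tooling:

    # Show the current SELinux mode; "Permissive" (or "Disabled") is expected
    # after `sudo setenforce permissive`. Note that setenforce does not survive
    # a reboot; set SELINUX=permissive in /etc/selinux/config to persist it.
    getenforce

    # Show which iptables backend is active; CentOS/RHEL 8 uses the nf_tables
    # backend, and components running iptables in legacy mode conflict with it.
    iptables --version    # e.g. "iptables v1.8.4 (nf_tables)"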
    

AirGap Artifacts list

Additionally, you need to download the BASH scripts from https://repo.kublr.com

You also need to download Helm package archives and Docker images:

Supported Kubernetes versions

v1.18

v1.17

v1.16 (Deprecated in 1.20.0)

v1.15 (End of support in 1.20.0)

Component versions

Kubernetes

Component     Version          Kublr Agent    Note
Kubernetes    1.18             1.18.6-7       default v1.18.6
              1.17             1.17.9-10
              1.16             1.16.13-5      Deprecated in 1.20.0
              1.15.(11, 12)    1.15.12-7      End of support in 1.20.0

Kublr Control Plane

Component             Version
Kublr Control Plane   1.19.0-84

Kublr Platform Features

Component                                       Version
Ingress                                         1.19.0-30
nginx ingress controller (helm chart version)   1.36.2
cert-manager                                    0.14.2
Centralized Logging                             1.19.0-35
ElasticSearch                                   6.8.4
Kibana                                          6.8.4
SearchGuard                                     25.5.0
SearchGuard Kibana plugin                       25.5.0
SearchGuard Admin                               6.8.4-25.5.0
RabbitMQ                                        3.8.3
Curator                                         5.8.1
Logstash                                        6.8.4
Fluentd                                         2.7.1
Centralized Monitoring                          1.19.0-31
Prometheus                                      2.13.0
Kube State Metrics                              2.4.1
AlertManager                                    0.19.0
Grafana                                         6.5.1

Known issues and limitations

  1. Master group scaling is not supported on GCP

  2. On AWS, if an NLB is used for the Ingress controller rather than a Classic Load Balancer (the default), you need to delete the load balancer manually when deleting the cluster. The NLB is enabled with a specification like the one below; a cleanup sketch follows it.

    spec:
      ...
      features:
        ingress:
        ...
          values:
            nginx-ingress:
              controller:
                service:
                  externalTrafficPolicy: Local
                  annotations:
                    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
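
    If an NLB created this way is left behind after the cluster is deleted, it can be removed with the AWS CLI. The commands below are a sketch; identify the correct load balancer by its tags or DNS name before deleting, and treat <nlb-arn> as a placeholder:

    # List NLBs in the region to find the one created for the ingress service
    aws elbv2 describe-load-balancers --query 'LoadBalancers[].{Name:LoadBalancerName,ARN:LoadBalancerArn}'

    # Delete the orphaned load balancer by its ARN (placeholder value)
    aws elbv2 delete-load-balancer --load-balancer-arn <nlb-arn>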
    
  3. On vSphere, nodes and node groups added to a cluster after creation need to be deleted manually when deleting the cluster.

  4. (Critical) Because the Docker image gcr.io/kubernetes-helm/tiller:v2.14.3 was discontinued in the Google image repository in August 2021 (related issue: “Make Tiller Image Available on Docker Hub”), Kublr may fail to complete cluster creation and update.

    The cluster hangs in the “Creating” or “Updating” state indefinitely or for a very long time, or goes to the “Error” state; in all cases the Tiller pod is unhealthy because the Tiller image is not available.

    All versions of Kublr before 1.21.2 (including this one), and Kublr Agent versions earlier than the ones included in Kublr 1.21.2, are affected.

    The issue and available solutions are described in the troubleshooting guide on the Kublr support portal.

    Migration to the latest Kublr Agent and Kublr Control Plane versions, or at least to Kublr 1.21.2, is recommended.