Kublr Release 1.18.1 (2020-07-07)

Kublr Quick Start

sudo docker run --name kublr -d --restart=unless-stopped -p 9080:9080 kublr/kublr:1.18.1

The Kublr Demo/Installer is a lightweight, dockerized, limited-functionality Kublr Platform which can be used to:

  • Test the setup and management of a standalone Kubernetes cluster
  • Set up a full-featured Kublr Platform

The Kublr Demo/Installer stores all data about the created clusters inside its Docker container. If you delete the container, you lose all data about the created clusters and Kublr Platforms, although the clusters and platforms themselves keep running. We recommend using the Kublr Demo/Installer to verify that a Kubernetes cluster can be created in your environment and to experiment with it. To manage a production cluster and access all features, create a full-featured Kublr Platform in a cloud or on-premises.
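Once the container starts, a quick sanity check is to confirm the container state and that the UI answers on port 9080. This is a minimal sketch, assuming Docker and curl are available on the host and the container was started with the Quick Start command above:

```shell
# URL where the dockerized Kublr Demo/Installer serves its UI
# (port 9080 matches the -p 9080:9080 mapping in the Quick Start command).
kublr_url="http://localhost:9080"

# Show the container state; expect "running" once startup completes.
docker inspect -f '{{.State.Status}}' kublr

# Probe the UI endpoint; prints a confirmation only if it responds.
curl -fsS -o /dev/null "$kublr_url" && echo "Kublr UI is reachable at $kublr_url"
```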

Overview

This update contains iterative patches and minor improvements, RHEL 8 support (technical preview), and Kublr cluster backups for AWS (technical preview). It also adds Kubernetes 1.16.10 and 1.17.7 with CVE fixes.

Changelog

Kubernetes 1.17.7, 1.16.10 support.

Improvements and stability

  • Cluster pre-flight checks, validation, and recommendations added
  • Default master disk size increased to 25 GB for vSphere installations
  • Agent versions parameter validation improved in the settings screen

Bug fixes

  • Improved error reporting in the Kublr API; fixed error reporting for proxied kubectl requests
  • Fixed the cni-calico network provider pod readiness check
  • Fixed: user had to be logged in to Kublr when using kubectl as a simple user
  • UI: fixed inability to increase the boot disk size for a master node in vSphere
  • Fixed the backup controller to work on AWS
  • Fixed: kubeconfig file missing while a cluster is in the updating state
  • Fixed: AWS Spotinst Elastigroups userData was generated incorrectly

Technical preview

RHEL/CentOS 8 is supported only in agent versions 1.17.3, 1.17.4, and 1.17.7.

  • For bare-metal installations, make sure that SELinux is not enforcing on the nodes:
    sudo setenforce permissive

  • On cloud installations, disabling SELinux is not necessary
  • CentOS/RHEL 8.0 is not supported; 8.1 or newer is required (https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/4/html-single/installation_guide/index#operating-system-requirements-for-red-hat-ceph-storage-install)
  • Do not use components (pods or node processes) that use iptables in legacy mode; this disrupts Kubernetes auto-detection and might render the cluster inoperable (https://github.com/projectcalico/calico/issues/3709)
  • By default, Kublr installs two components that depend on iptables-legacy: NodeLocalDns and the Flannel CNI provider (part of cni-canal). For CentOS/RHEL 8 support, these components must be disabled or not used.
  • Therefore, only the Calico and Weave CNI providers are supported (on Azure, only Weave is supported)
  • To disable NodeLocalDns and select the CNI provider, add the following lines to the custom cluster specification:
    spec:
      ...
      network:
        ...
        provider: cni-calico
        enableLocalDns: false
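The iptables-legacy caveat above can be checked per node: `iptables --version` reports which backend the binary uses (RHEL/CentOS 8 ships the nf_tables backend). A small sketch of classifying that output; the `iptables_mode` helper is hypothetical, for illustration only:

```shell
# Classify the backend reported by `iptables --version`.
# RHEL/CentOS 8 typically prints something like "iptables v1.8.4 (nf_tables)";
# "legacy" in the output means the node uses the legacy backend, which
# conflicts with Kubernetes auto-detection as described above.
iptables_mode() {
  case "$1" in
    *nf_tables*) echo "nft" ;;
    *legacy*)    echo "legacy" ;;
    *)           echo "unknown" ;;
  esac
}

# Usage on a node:
#   iptables_mode "$(iptables --version)"
iptables_mode "iptables v1.8.4 (nf_tables)"   # prints: nft
```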

Kublr Backup controller

  • Disabled by default; you can enable this feature on the settings page in the UI
  • Backup/restore is available only for AWS

Components versions

Kubernetes

| Component  | Version | Kublr Agent | Note                 |
|------------|---------|-------------|----------------------|
| Kubernetes | 1.17.7  | 1.17.7-4    |                      |
|            | 1.17.4  | 1.17.4-8    | Deprecated in 1.20.0 |
|            | 1.17.3  | 1.17.3-10   | Deprecated in 1.20.0 |
|            | 1.16.10 | 1.16.10-2   |                      |
|            | 1.16.8  | 1.16.8-8    | Deprecated in 1.20.0 |
|            | 1.16.7  | 1.16.7-6    | Deprecated in 1.19.0 |
|            | 1.16.6  | 1.16.6-6    | Deprecated in 1.19.0 |
|            | 1.16.4  | 1.16.4-10   | Deprecated in 1.19.0 |
|            | 1.15.11 | 1.15.11-11  | Deprecated in 1.19.0 |

Kublr Control Plane

| Component           | Version |
|---------------------|---------|
| Kublr Control Plane | 1.18.1  |

Kublr Platform Features

| Component                                     | Version |
|-----------------------------------------------|---------|
| Ingress                                       | 1.18.1  |
| nginx ingress controller (helm chart version) | 1.36.2  |
| cert-manager                                  | 0.14.2  |
| Centralized Logging                           | 1.18.1  |
| ElasticSearch                                 | 6.8.4   |
| Kibana                                        | 6.8.4   |
| RabbitMQ                                      | 3.8.3   |
| Curator                                       | 5.8.1   |
| Logstash                                      | 6.8.4   |
| Fluentd                                       | 2.7.1   |
| Centralized Monitoring                        | 1.18.1  |
| Prometheus                                    | 2.13.0  |
| Kube State Metrics                            | 2.4.1   |
| AlertManager                                  | 0.19.0  |
| Grafana                                       | 6.5.1   |

Known issues and limitations

  1. If you upgrade the platform from version 1.18.0 to 1.18.1, only agent version 1.18.1 will be available after the upgrade. If managed clusters use agents of previous versions, those agent versions need to be added manually on the settings page.

  2. For migration to 1.18, review the Kublr 1.18 migration document.