This document explains the main aspects of the Kublr Logging feature. Centralized Logging gives you a remote view of your clusters’ health. It automatically ships log entries from the clusters managed by the Kublr Platform to Elasticsearch and gives you single sign-on (coming soon) access to Kibana with predefined dashboards, which you can further customize for your needs.

How Kublr Logging Works

The Kublr Centralized Log Collection feature is built on top of the classic Elasticsearch/Logstash/Kibana (ELK) stack. In addition, for better resilience, RabbitMQ routes and stores log entries between sync-up sessions.

In each managed Kubernetes cluster, a cluster-level Helm package with RabbitMQ and Fluentd is deployed. Fluentd collects log entries from all levels: OS, pods, Kubernetes components, and the Kublr agent. RabbitMQ is configured as the primary destination for the collected logs, and the Kubernetes port-forwarding feature is used as the data channel between the Kublr Platform and the Kublr Cluster.

On the Kublr Platform side, the kublr-central-logging Helm package includes Elasticsearch, Kibana, and Logstash, along with RabbitMQ and the RabbitMQ Shovel plugin, which transfers messages from all clusters to the centralized RabbitMQ. From RabbitMQ the messages are digested by Logstash and stored in Elasticsearch. Kibana, with single sign-on from Kublr, provides a convenient UI for accessing and searching log entries from all clusters.
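
As a quick sanity check, the components described above can be located in the platform cluster. This is a minimal sketch, assuming kubectl access and the default ‘kublr’ namespace; the component names in the loop are illustrative search strings, not guaranteed pod names, and the commands are printed rather than executed:

```shell
# Illustrative only: print the commands you would run to locate each
# central-logging component (actual pod names vary between releases).
NS="kublr"
for component in elasticsearch logstash kibana rabbitmq; do
  echo "kubectl get pods -n $NS | grep $component"
done
```

Running the printed commands against a live platform cluster shows whether each piece of the pipeline has been scheduled.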

In addition to the centralized log collection, local Elasticsearch and Kibana may be installed by the user in the managed clusters. They may be used instead of, or in addition to, the centralized log collection mechanism.


1. Create a Platform with Centralized Logging

  1. Initiate creation of the Kublr platform. An example of how to do that can be found here.

  2. In the DEPLOY FULL KUBLR PLATFORM dialog, click the FEATURES step.

  3. Scroll to the Centralized Logging section.

    Centralized Logging is always enabled when creating a Platform.

  4. Specify the number of master/client/data Elasticsearch nodes. In general, one data node is enough for 2-3 clusters created using the Platform, but this depends on the volume of logs generated by each cluster.

    Note: We highly recommend enabling Persistence for collected logs. Otherwise, your custom templates and dashboards will be deleted when the Elasticsearch pod restarts.


  5. Complete creation of the Kublr platform. The Centralized Logging feature is installed on the Platform.

  6. To open the Kubernetes Dashboard, go to the platform page, open the CLUSTER tab, and click Open Dashboard.


  7. In Namespace, select “kublr”.

  8. Open the Pods page and check that all pods are running.

    Note: It may take up to 20 minutes for all pods to fully start. The RabbitMQ and port-fwd pods may restart periodically.
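
The same readiness check can be run from a terminal instead of the Dashboard. The commands below are printed for illustration and assume a working kubeconfig for the platform cluster:

```shell
# Printed for illustration; run the real commands against your platform cluster.
NS="kublr"
CHECK="kubectl get pods -n $NS --no-headers"
echo "$CHECK"
# Piping the real output through awk would list pods that are not yet Running:
FILTER="$CHECK | awk '\$3 != \"Running\" {print \$1}'"
echo "$FILTER"
```

Re-run the filtered command until it prints nothing, meaning all pods have reached the Running state.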


In the current implementation, logs are retained for 7 days by default. If you want to keep logs for longer, follow the steps described in section 5 below.

2. Create a Cluster with Centralized Logging

To create a cluster with centralized logging:

  1. On the left menu, click Clusters.

  2. Click Add Cluster. The Select Installation Type dialog is displayed.

  3. In the Select Installation Type dialog, click Cluster.

  4. Click Continue Setup.

  5. Set cluster parameters.

  6. Use the FEATURES tab.

  7. Select the Self-Hosted Logging checkbox.

  8. In the Elasticsearch section, specify the appropriate parameters.


  9. Finalize cluster creation.

    The Centralized Logging feature will be installed on the Cluster.

3. Log in to Kibana for the Platform and check logs

Open the Platform and navigate to Centralized Logging from the Menu. Open the Kibana page.


When Kibana opens on the Management page, configure an index pattern and click the Next step button.


Select the Time Filter field name (e.g. @timestamp) and click the Create index pattern button.


Navigate to Discover from the Menu.

In the upper right corner, select the Time range (e.g. Last 15 minutes).


Check that the results contain logs from both the Platform and the Cluster (e.g. by the payload.cluster_name field).

Note: Logs from the Cluster will appear approximately 20 minutes after the cluster is created.

Use the payload.cluster_name filter and search to find the logs you need.


In Kibana, logs are displayed both from the Platform and from the clusters.


4. Create a Cluster with additional self-hosted Elasticsearch/Kibana Logging

Centralized Logging is always enabled for your clusters. If you want additional logging for your cluster, follow the steps below:

  1. Initiate creation of the Kublr cluster. An example of how to do that can be found here.

  2. In the ADD CLUSTER dialog, click the FEATURES step.

  3. Select the Self-Hosted Logging checkbox.

  4. Specify the number of master/client/data Elasticsearch nodes.

  5. If necessary, select Persistence enabled, and then optionally specify the Data node disk size.


  6. Complete creation of the Kublr cluster. The Self-Hosted Logging feature is installed on the cluster.

  7. Once the cluster is created, go to the cluster page, open the CLUSTER tab, and scroll to the FEATURES section.

  8. In the Logging field, click the link. The Kibana page is opened.

  9. In Kibana, enter the username/password from the KubeConfig file.

    The file can be downloaded from Kublr on the cluster page, CLUSTER tab.

  10. In Kibana, create an Index Pattern.

  11. To check logs, on the left menu, click Discover.


5. Change parameters to collect Logs for more than 7 days

Cluster installation phase

By default, the curator is configured to delete indices older than 7 days. You can change this setting by specifying the retentionPeriodDays parameter in the custom specification. Here is an example for a platform:

            retentionPeriodDays: 30

Here is an example for a cluster with self-hosted Elasticsearch:

            retentionPeriodDays: 30

To use a custom specification, click the Customize Cluster Specification button instead of Confirm and Install when creating a platform or cluster.

After the cluster is deployed

Go to Config Maps, open kublr-logging-curator-config, click the Edit button, and change unit_count from 7 to the required value.
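
The edit above can also be scripted. This is a sketch only: the interactive kubectl command in the comment assumes access to the platform cluster, and the config fragment is a hypothetical excerpt of a curator action file, not the full config map:

```shell
# Against a live platform this could be done interactively:
#   kubectl -n kublr edit configmap kublr-logging-curator-config
# The change itself is only the retention value, e.g.:
config='unit: days
unit_count: 7'
updated=$(printf '%s\n' "$config" | sed 's/unit_count: 7/unit_count: 30/')
printf '%s\n' "$updated"
```

The same sed substitution can be applied to the exported config map before re-applying it.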


Note: You must independently calculate the resources necessary for your task and your environment.

6. Enabling X-Pack in Elasticsearch/Logstash/Kibana

By default, centralized logging is preconfigured to use ELK without X-Pack. If you want X-Pack installed, use a custom cluster/platform specification to switch to images that include X-Pack and set the xpackEnable option to true (add the overriding values under the “logging” section of the custom spec):

            name: elasticsearch/elasticsearch
            xpackEnable: true
            name: kibana/kibana
            name: logstash/logstash

To use custom specifications, click the CUSTOMIZE SPECIFICATION button.


7. Search Guard (ELK Multi-user access)

Kublr uses the Search Guard Open Source security plugin to provide multi-user access to Elasticsearch and Kibana.

Because the Community Edition is used, we implemented our own mechanism for provisioning Kublr roles to Search Guard, since AD/LDAP/etc. are available only in the Search Guard Enterprise Edition. The Kublr administrator does not need to worry about configuring roles in the Search Guard configuration files, except in complex cases.


By default, centralized logging is preconfigured to use ELK with Search Guard.

To switch off Search Guard, use the following values in the custom specification:

           enabled: false

Access Control

Kublr manages Search Guard roles. As soon as a new cluster is created in a space, a new Search Guard role is created. When a cluster is deleted and purged, Kublr restricts access to its indices. This may cause the entire pattern to be restricted; see the “Cluster Removed and Purged Case” section on the logging troubleshooting page.

One role is created per space. This means that all users who have ‘List’ access to a Kublr space resource will have access to all logs of all clusters in that space.


Kublr provides default index patterns for each space created. By default, there are kublr_default* and kublr* index patterns. The first can be used to see all logs of all clusters in the ‘default’ space. The second allows an admin to access any logs, including logs of the Kublr platform cluster.
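
The per-space pattern follows the kublr_<space>* form shown above for the ‘default’ space; the helper below is an illustrative sketch that assumes the naming convention generalizes to other space names, and is not a Kublr API:

```shell
# Illustrative helper, not a Kublr API: derive the per-space index pattern
# from a space name, following the kublr_default* convention above.
space_pattern() { echo "kublr_$1*"; }
space_pattern default   # prints: kublr_default*
```

For example, a hypothetical space named ‘staging’ would map to the kublr_staging* pattern under this convention.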

Because Kibana Multitenancy is part of the Enterprise edition of Search Guard, there is no way to hide kublr* and other index patterns that a user cannot access. However, Search Guard restricts access at the index layer, so a user will not get access to indices belonging to other spaces.

At the same time, for the spaces a user is granted access to, they can see the logs of all clusters in those spaces.

Roles Customization

If it is necessary to specify permissions more narrowly, the administrator can modify the Search Guard configuration using the sgadmin utility. All necessary certificates are stored in the kublr-logging-searchguard secret in the ‘kublr’ namespace of the platform cluster where centralized logging is deployed.

There is a simple way to retrieve and apply the Search Guard config using the logging-controller pod:

$ kubectl exec -it -n kublr $(kubectl get pods -n kublr \
           -o=custom-columns=NAME:.metadata.name | grep logging-controller) -- /bin/bash
bash-4.4$ cd /home/centrolog
bash-4.4$ /opt/logging-controller/retrieve.sh
bash-4.4$ ls
sg_action_groups.yml   sg_config.yml  sg_internal_users.yml  sg_roles.yml  sg_roles_mapping.yml
#modify necessary files using vi
bash-4.4$ /opt/logging-controller/apply.sh

Because Kublr manages space-based roles, do not use the ‘kublr:’ prefix for your own roles. Refer to the Search Guard documentation for guidance on roles, role mappings, and other configuration.

Additional information can be found on the logging troubleshooting page.


If the restrictions or access rights are unclear, you can trace the interaction between Kublr and Search Guard.

First, examine the logs of the sg-auth-proxy container in the kublr-logging-kibana pod. The following entry contains information about the user and their roles:

2019/07/02 18:27:00.809099 proxy.go:108: User '383f7ac8-8e32-4157-99c8-221c28fc1417': 
          name=michael, roles=[uma_authorization user kublr:default]
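
When scanning these logs, the space-scoped roles are the ones carrying the ‘kublr:’ prefix. A small sketch that pulls the roles list out of such an entry; the log line is copied from the example above:

```shell
# Sample sg-auth-proxy entry (copied from the example above).
line="2019/07/02 18:27:00.809099 proxy.go:108: User '383f7ac8-8e32-4157-99c8-221c28fc1417': name=michael, roles=[uma_authorization user kublr:default]"
# Extract the bracketed roles list.
roles=$(printf '%s\n' "$line" | sed -n 's/.*roles=\[\(.*\)\].*/\1/p')
echo "$roles"   # prints: uma_authorization user kublr:default
# Space-scoped roles carry the 'kublr:' prefix:
echo "$roles" | tr ' ' '\n' | grep '^kublr:'   # prints: kublr:default
```

Here the user michael is mapped to the kublr:default role, i.e. access to the logs of the ‘default’ space.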

Second, retrieve Search Guard configuration files, as described above.

If you are unsure which attributes are accessible, you can always query the /_searchguard/authinfo endpoint to check. The endpoint lists all attribute names for the currently logged-in user. You can use Kibana Dev Tools and issue GET _searchguard/authinfo.
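
Outside Kibana, the same authinfo endpoint can be queried directly against Elasticsearch over HTTPS. The sketch below only prints the command; the host and credential variables are placeholders, not values from this document:

```shell
# Placeholders, not values from this document:
ES_HOST="localhost:9200"
AUTHINFO="https://$ES_HOST/_searchguard/authinfo"
# Print the curl invocation you would run with real credentials:
echo "curl -sk -u \$ES_USER:\$ES_PASS $AUTHINFO"
```

The JSON response includes the user name, backend roles, and Search Guard roles for the authenticated user.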
