This document explains the main aspects of the Kublr Logging feature. Centralized Logging gives you remote insight into your clusters’ health. It automatically gathers log entries into Elasticsearch from the clusters managed by the Kublr Platform and gives you single sign-on (coming soon) access to Kibana with pre-defined dashboards, which you can further customize for your needs.

How does Kublr Logging work?

The Kublr Centralized Log Collection feature is built on top of the classic Elasticsearch, Logstash, and Kibana (ELK) stack. In addition, for better resilience, RabbitMQ buffers log entries between sync-up sessions.

On each Kublr cluster, a cluster-level Helm package deploys RabbitMQ, Fluentd, and HAProxy. Fluentd collects log entries from all levels: the OS, pods, and Kubernetes components, including Kublr core. RabbitMQ is configured as the primary destination for the collected logs, and the Kubernetes port-forwarding feature provides the data channel between the Kublr Platform and the Kublr Cluster.

On the Kublr Platform side, the kublr-central-logging Helm package includes Elasticsearch, Kibana, Logstash, and RabbitMQ with the RabbitMQ Shovel plugin, which transfers messages from all clusters to the centralized RabbitMQ. From there, the messages are ingested by Logstash and stored in Elasticsearch. Kibana with single sign-on from Kublr provides a convenient UI for accessing and searching log entries from all clusters.

In addition to centralized log collection, a local Elasticsearch and Kibana may be installed by the user. They work in parallel with the centralized log collection mechanism.

1. Create a Platform with Centralized Logging

Create provider credentials first. Then go to the Clusters page and click the Add Kublr Platform button.

Add platform

Select the Provider.

Fill out necessary parameters (e.g. Full Kublr Platform Credentials).

Select an Instance Type for the Nodes that is larger than the default (e.g. t2.2xlarge for AWS or Standard_A8_v2 for Azure).

Set parameters

Centralized Logging is always enabled when creating a Platform.

Add logging

Note: We highly recommend enabling Persistence for collecting logs. Otherwise, your custom templates and dashboards will be deleted after a restart of the Elasticsearch pod.

The Centralized Logging feature will be installed on the Platform.

Go to the Platform’s Overview page and open the Kubernetes Dashboard.


Select Kublr namespace.


Open the Pods page and check that all pods are Running. Please note: it may take up to 20 minutes for all pods to fully start; the RabbitMQ and port-fwd pods may restart periodically during this time.


Note: In the current implementation, logs are retained for 2 days. If you want to keep logs for longer, follow the steps described in section 5 below.

2. Create a Cluster with Centralized Logging

When the Platform is created, log in with the full Kublr Platform Credentials.

Click the Add cluster button. Select Provider.

Centralized Logging is always enabled when creating a Cluster.

Click the Confirm and Install button.

The Centralized Logging feature will be installed on the Cluster.


3. Log in to Kibana for the Platform and check logs

Open the Platform and navigate to Centralized Logging from the Menu. Open the Kibana page.

C Logging

Another way to open Kibana is to open the platform’s Overview page and copy the API Endpoint.

In the browser’s address bar, append /api/v1/namespaces/kublr/services/kublr-logging-kibana/proxy/app/kibana to the API Endpoint, e.g. https://&lt;API Endpoint&gt;/api/v1/namespaces/kublr/services/kublr-logging-kibana/proxy/app/kibana
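The URL construction above can be sketched as a small shell snippet; the endpoint value below is a hypothetical placeholder for whatever you copy from the Overview page:

```shell
# Hypothetical API Endpoint copied from the platform's Overview page
KUBLR_API_ENDPOINT="https://example.kublr.local:443"

# Path of the Kibana service proxied through the Kubernetes API
KIBANA_PATH="/api/v1/namespaces/kublr/services/kublr-logging-kibana/proxy/app/kibana"

# Full URL to paste into the browser's address bar
KIBANA_URL="${KUBLR_API_ENDPOINT}${KIBANA_PATH}"
echo "$KIBANA_URL"
```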

In the opened window, enter the username and password from the Kube Config file that can be downloaded from the Platform’s Overview page.
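The credentials can be pulled out of the downloaded file from a terminal. This is a sketch that assumes the Kube Config uses basic-auth username/password fields; the sample file content below is hypothetical, so check the structure of your actual file:

```shell
# Hypothetical minimal excerpt of a downloaded Kube Config file
cat > /tmp/sample-kubeconfig.yaml <<'EOF'
users:
- name: admin
  user:
    username: admin
    password: s3cr3t-example
EOF

# Extract the basic-auth credentials used for the Kibana login prompt
USERNAME=$(awk '/^ *username:/ {print $2}' /tmp/sample-kubeconfig.yaml)
PASSWORD=$(awk '/^ *password:/ {print $2}' /tmp/sample-kubeconfig.yaml)
echo "$USERNAME / $PASSWORD"
```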


When Kibana opens on the Management page, configure an index pattern and click the Next step button:


Select the Time Filter field name (e.g. @timestamp) and click the Create index pattern button.

Create index

Navigate to Discover from the Menu.

In the right upper corner select the Time range (e.g. Last 15 minutes).


Check that the results include logs from both the Platform and the Cluster (e.g. by the payload.cluster_name field).

Note: Logs from the Cluster will appear approximately 20 minutes after the cluster is created.

Use the payload.cluster_name filter and search queries to find the logs you need.
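For reference, the same filter can be expressed as an Elasticsearch query body. The fragment below is an illustrative sketch: the cluster name my-cluster is a placeholder, and depending on how the index is mapped you may need to query payload.cluster_name.keyword instead:

```json
{
  "query": {
    "match": {
      "payload.cluster_name": "my-cluster"
    }
  }
}
```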


In Kibana, logs are displayed both from the Platform and from clusters.


4. Create a Cluster with additional self-hosted Elasticsearch/Kibana Logging

Centralized logging is always enabled for your clusters. If you want additional logging for your cluster, click on “Add Logging.”

When the Platform is created, log in to it with the full Kublr Platform Credentials.

Click the Add cluster button. Select Provider.

Select Self-hosted Elasticsearch/Kibana in the Logging parameter.

Self hosted

Select Persistence enabled, if needed.


Click the Confirm and Install button.

When the Cluster is created, go to the cluster’s Overview page -> Features; the Logging parameter should contain a link to Kibana.


Click the link. In the opened window, enter the username and password from the Kube Config file that can be downloaded from the cluster’s Overview page.

Create an Index Pattern.

Navigate to Discover from the Menu and check logs.


5. Change parameters to collect Logs for more than 2 days

Open Dashboard from the cluster’s Overview page.

Select Kublr namespace.

Go to Stateful Sets. Open kublr-logging-elasticsearch. Click the Edit button and change:

ES_JAVA_OPTS (e.g. from 1g to 2g)

Java ops

resources/requests (in our example, from 2Gi to 4Gi).

Memory limits


Set Xmx to no more than 50% of your physical RAM to ensure that there is enough physical RAM left for kernel file system caches.

Set the minimum heap size (Xms) and maximum heap size (Xmx) equal to each other.

For additional details go to https://www.elastic.co/guide/en/elasticsearch/reference/master/heap-size.html
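The 50% rule can be checked with a quick calculation; the 8 GiB node size below is just an example:

```shell
# Physical RAM of the node, in MiB (hypothetical 8 GiB node)
NODE_RAM_MB=8192

# Per the Elasticsearch guidance above: Xmx at most 50% of RAM,
# and Xms set equal to Xmx
HEAP_MB=$((NODE_RAM_MB / 2))
echo "-Xms${HEAP_MB}m -Xmx${HEAP_MB}m"
```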

Go to Config Maps. Open kublr-logging-curator-config. Click the Edit button and change unit_count from 2 to the required value.
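For orientation, a retention filter in a typical Elasticsearch Curator action file looks like the fragment below. The surrounding layout of the Kublr ConfigMap may differ; only unit_count needs to change:

```yaml
filters:
- filtertype: age
  source: creation_date
  direction: older
  unit: days
  unit_count: 7   # was 2: delete indices older than 7 days instead of 2
```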


Note: You must independently calculate the resources necessary for your task and your environment.

Questions? Suggestions? Need help? Contact us.