This document explains the main aspects of the Kublr Logging feature. Centralized Logging lets you evaluate your clusters’ health remotely. It automatically gathers log entries from the clusters managed by the Kublr Platform into Elasticsearch and gives you single sign-on (coming soon) access to Kibana with pre-defined dashboards, which you can further customize for your needs.
The Kublr Centralized Log Collection feature is built on top of the classic Elasticsearch/Logstash/Kibana (ELK) stack. In addition, for better resilience, RabbitMQ buffers log entries between sync-up sessions.

On each Kublr cluster, a cluster-level Helm package with RabbitMQ, Fluentd, and HAProxy is deployed. Fluentd collects log entries from all levels: the OS, pods, and Kubernetes components, including Kublr-core. RabbitMQ is configured as the primary destination for the collected logs, and the Kubernetes port-forwarding feature provides the data channel between the Kublr Platform and the Kublr cluster.

On the Kublr Platform side, the kublr-central-logging Helm package includes Elasticsearch, Kibana, and Logstash, together with RabbitMQ and the RabbitMQ Shovel plugin, which transfers messages from all clusters to the centralized RabbitMQ. From RabbitMQ, the messages are ingested by Logstash and stored in Elasticsearch. Kibana with single sign-on from Kublr provides a convenient UI for accessing and searching log entries from all clusters. In addition to centralized log collection, a local Elasticsearch and Kibana may be installed by the user; they work in parallel with the centralized log collection mechanism.
Credentials should be created first. Go to the Clusters page and click the Add Kublr Platform button.
Select the Provider.
Fill out necessary parameters (e.g. Full Kublr Platform Credentials).
Select an Instance Type for the nodes larger than the default (e.g. t2.2xlarge for AWS or Standard_A8_v2 for Azure).
Centralized Logging is always enabled when creating a Platform.
Specify the number of master/client/data Elasticsearch nodes. In general, one data node is enough for 2-3 clusters created using the Platform, but it depends on the volume of logs generated by each cluster.
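As a rough sketch, the node counts could be expressed in a custom specification along the following lines. Note that the key names (master, client, data, replicas) are illustrative assumptions based on common Elasticsearch Helm chart conventions, not the authoritative Kublr schema; check the spec generated by your platform for the actual field names:

```yaml
# Hypothetical sketch only -- the real key names in the Kublr spec may differ.
spec:
  features:
    logging:
      values:
        elasticsearch:
          master:
            replicas: 3   # master-eligible nodes (assumed key)
          client:
            replicas: 1   # client/coordinating nodes (assumed key)
          data:
            replicas: 1   # one data node per 2-3 clusters as a starting point
```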
Note: We highly recommend enabling Persistence when collecting logs. Otherwise, your custom templates and dashboards will be deleted after a restart of the Elasticsearch pod.
The Centralized Logging feature will be installed on the Platform.
Go to the Platform’s Overview page and open the Kubernetes Dashboard.
Select the kublr namespace.
Open the Pods page and check that all pods are Running. Please note: it may take up to 20 minutes for all pods to fully start. The RabbitMQ and port-fwd pods may restart periodically.
Note: In the current implementation, logs are retained for 2 days. If you want to retain logs for longer, follow the steps described in this article.
When the Platform is created, log in with full Kublr Platform Credentials.
Click the Add cluster button and select the Provider.
Centralized Logging is always enabled when creating a Cluster.
Click the Confirm and Install button.
The Centralized Logging feature will be installed on the Cluster.
Open the Platform and navigate to Centralized Logging from the Menu. Open the Kibana page.
When Kibana opens on the Management page, configure an index pattern and click the Next step button.
Select the Time Filter field name (e.g. @timestamp) and click the Create index pattern button.
Navigate to Discover from the Menu.
In the upper right corner, select the Time range (e.g. Last 15 minutes).
Check that the results include logs from both the Platform and the Cluster (e.g. by the payload.cluster_name field).
Note: Logs from the Cluster will appear approximately 20 minutes after the cluster is created.
Filter by payload.cluster_name in the search bar to find the logs you need.
In Kibana, logs are displayed both from the Platform and from clusters.
Centralized logging is always enabled for your clusters. If you want additional logging for your cluster, click on “Add Logging.”
When the Platform is created, log in with full Kublr Platform Credentials.
Click the Add cluster button and select the Provider.
Select Self-hosted ElasticSearch/Kibana in the Logging parameter.
Select Persistence enabled, if needed.
Click the Confirm and Install button.
When the Cluster is created, go to the cluster’s Overview page -> Features; the Logging parameter should contain a link to Kibana.
Click the link. In the window that opens, enter the username/password from the KubeConfig file, which can be downloaded from the cluster’s Overview page.
Create an Index Pattern.
Navigate to Discover from the Menu and check logs.
By default, the curator is configured to delete indexes older than 7 days. You can change this setting by specifying the retentionPeriodDays parameter in a custom specification. Here is an example for a platform:
spec:
  features:
    logging:
      sinks:
        - centralLogging:
            retentionPeriodDays: 30
Here is an example for a cluster with self-hosted Elasticsearch:
spec:
  features:
    logging:
      sinks:
        - selfHosted:
            retentionPeriodDays: 30
To use a custom specification, click the Customize Cluster Specification button instead of Confirm and Install when creating a platform or cluster.
Go to Config Maps and open kublr-logging-curator-config. Click the Edit button and change unit_count from 7 to the required value.
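For orientation, a typical Elasticsearch Curator action file of the kind stored in such a ConfigMap looks like the sketch below. The actual contents of kublr-logging-curator-config may differ; in particular, the index prefix and the other filter values here are illustrative assumptions, and only unit_count corresponds to the setting described above:

```yaml
# Illustrative Curator action file -- the real kublr-logging-curator-config may differ.
actions:
  1:
    action: delete_indices
    description: Delete indices older than 7 days
    options:
      ignore_empty_list: True
    filters:
      - filtertype: pattern
        kind: prefix
        value: logstash-      # assumed index prefix
      - filtertype: age
        source: name
        direction: older
        timestring: '%Y.%m.%d'
        unit: days
        unit_count: 7         # change this value to adjust retention
```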
Note: You must calculate the resources required for your workload and environment yourself.
By default, centralized logging is preconfigured to use ELK without X-Pack. If you want X-Pack installed, use a custom cluster/platform specification to switch to images that include X-Pack and set the xpackEnable option to true (add the overridden values under the “logging” section of the custom spec):
logging:
  values:
    elasticsearch:
      image:
        name: elasticsearch/elasticsearch
      cluster:
        xpackEnable: true
    kibana:
      image:
        name: kibana/kibana
    logstash:
      image:
        name: logstash/logstash
To use a custom specification, click the Customize Cluster Specification button instead of Confirm and Install when creating a platform.