Logging

Overview

This document explains the main aspects of the Kublr Logging feature. Centralized Logging gives you a remote view of your clusters’ health. It automatically gathers log entries into Elasticsearch from the clusters managed by the Kublr Platform and gives you single sign-on (coming soon) access to Kibana with pre-defined dashboards, which you can further customize for your needs.

How Kublr Logging works

The Kublr Centralized Log Collection feature is built on top of the classic Elasticsearch/Logstash/Kibana (ELK) stack. In addition, for better resilience, RabbitMQ buffers log entries between sync-up sessions.

On each Kublr cluster, a cluster-level Helm package with RabbitMQ, Fluentd and HAProxy is deployed. Fluentd collects log entries from all levels: OS, pods, and Kubernetes components, including Kublr-core. RabbitMQ is configured as the primary destination for the collected logs, and the Kubernetes port-forwarding feature is used as the data channel between the Kublr Platform and the Kublr Cluster.

On the Kublr Platform side, the Kublr-central-logging Helm package includes Elasticsearch, Kibana, Logstash, and a central RabbitMQ with the RabbitMQ Shovel plugin, which transfers messages from all clusters to the centralized RabbitMQ. From RabbitMQ they are digested by Logstash and stored in Elasticsearch. Kibana with single sign-on from Kublr provides a convenient UI for accessing and searching log entries from all clusters.

In addition to centralized log collection, a local Elasticsearch and Kibana may be installed by the user. They work in parallel with the centralized log collection mechanism.
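
All of this is driven by the logging feature of the cluster specification. The sketch below merely assembles the spec fragments that appear later in this document (sections 5-7) into one place; it is illustrative rather than a complete reference:

spec:
  features:
    logging:
      sinks:
        -
          centralLogging:            # ship logs to the platform's central ELK stack
            retentionPeriodDays: 7   # default retention, see section 5
        -
          selfHosted:                # optional per-cluster Elasticsearch/Kibana
            retentionPeriodDays: 7
      values: {}                     # Helm chart value overrides, see sections 6-7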

1. Create a Platform with Centralized Logging

Credentials should be created first. Then go to the Clusters page and click the Add Kublr Platform button.

Add platform

Select the Provider.

Fill out necessary parameters (e.g. Full Kublr Platform Credentials).

Select an Instance Type for the nodes that is larger than the default (e.g. t2.2xlarge for AWS or Standard_A8_v2 for Azure).

Set parameters

Centralized Logging is always enabled when creating a Platform.

Specify the number of master/client/data Elasticsearch nodes. In general, one data node is enough for 2-3 clusters created using the Platform, but it depends on the volume of logs generated by each cluster.

Add logging

Note: We highly recommend enabling Persistence for collected logs. Otherwise your custom templates and dashboards will be deleted after a restart of the Elasticsearch pod. A spec-level sketch of these settings follows below.
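
For reference, when working with a custom specification instead of the UI form, the node counts and the Persistence switch correspond to Elasticsearch chart values roughly like the following. This is a hedged sketch: the key names follow the common Elasticsearch Helm chart layout and are assumptions here, not confirmed Kublr value names.

spec:
  features:
    logging:
      values:
        elasticsearch:
          master:
            replicas: 3        # master node count from the form above
          client:
            replicas: 2        # client node count
          data:
            replicas: 1        # roughly one data node per 2-3 clusters
            persistence:
              enabled: true    # keep indexes, templates and dashboards across pod restarts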

The Centralized Logging feature will be installed on the Platform.

Open the Dashboard from the platform’s Overview page.

Features

Go to the Platform’s Overview page and open the Kubernetes Dashboard.

Dashboard

Select the kublr namespace.

Namespace

Open the Pods page and check that all pods are running. Please note: it may take up to 20 minutes for all pods to fully start, and the RabbitMQ and port-fwd pods may restart periodically.

Pods

Note: In the current implementation, logs are retained for 7 days by default. If you want to collect logs for longer, follow the steps described in section 5 below.

2. Create a Cluster with Centralized Logging

When the Platform is created, log in to it with the full Kublr Platform Credentials.

Click the Add cluster button and select the Provider.

Centralized Logging is always enabled when creating a Cluster.

Click the Confirm and Install button.

The Centralized Logging feature will be installed on the Cluster.

Features

3. Log in to Kibana for the Platform and check logs

Open the Platform and navigate to Centralized Logging from the Menu. Open the Kibana page.

C Logging

When Kibana opens on the Management page, configure an index pattern (e.g. kublr*) and click the Next step button:

Pattern

Select the Time Filter field name (e.g. @timestamp) and click the Create index pattern button.

Create index

Navigate to Discover from the Menu.

In the upper right corner, select the Time range (e.g. Last 15 minutes).

Kibana

Check that the results include logs from both the Platform and the Cluster (e.g. by the payload.cluster_name field).

Note: Logs from the Cluster will appear approximately 20 minutes after the cluster is created.

Use the payload.cluster_name filter and search to find the logs you need (e.g. payload.cluster_name: "my-cluster", where my-cluster is your cluster’s name).

Index

In Kibana, logs are displayed both from the Platform and from clusters.

Logs

4. Create a Cluster with additional self-hosted Elasticsearch/Kibana Logging

Centralized logging is always enabled for your clusters. If you want additional logging for your cluster, click Add Logging.

When the Platform is created, log in to it with the full Kublr Platform Credentials.

Click the Add cluster button. Select Provider.

Select Self-hosted ElasticSearch/Kibana in the Logging parameter.

Self hosted

Select Persistence enabled, if needed.

Persistent

Click the Confirm and Install button.

When the Cluster is created, go to the cluster’s Overview page -> Features; the Logging parameter should have a link to Kibana.

Features

Click the link. In the window that opens, enter the username/password from the KubeConfig file, which can be downloaded from the cluster’s Overview page.
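
For orientation, the credentials live in the users section of the standard kubeconfig layout. The fragment below is a hypothetical sketch; the entry name and values are illustrative:

users:
- name: demo-cluster-admin      # hypothetical user entry name
  user:
    username: admin             # use this as the Kibana login
    password: <generated-password>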

Create an Index Pattern.

Navigate to Discover from the Menu and check logs.

Kibana

5. Change parameters to collect Logs for more than 7 days

Cluster installation phase

By default, Curator is configured to delete indexes older than 7 days. You can change this setting by specifying the retentionPeriodDays parameter in a custom specification. Here is an example for a platform:

spec:
  features:
    logging:
      sinks:
        -
          centralLogging:
            retentionPeriodDays: 30

Here is an example for a cluster with self-hosted Elasticsearch:

spec:
  features:
    logging:
      sinks:
        -
          selfHosted:
            retentionPeriodDays: 30

To use a custom specification, click the Customize Cluster Specification button instead of Confirm and Install when creating a platform or cluster.

After the cluster is deployed

Go to Config Maps and open kublr-logging-curator-config (alternatively, run kubectl edit configmap kublr-logging-curator-config -n kublr). Click the Edit button and change unit_count from 7 to the required value. A sketch of the relevant fragment follows below.

Curator
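
For orientation, unit_count lives in a standard Curator delete_indices action. The sketch below shows the usual shape of such an action; the exact contents of kublr-logging-curator-config (filters, timestring, index selection) may differ, so treat everything except unit_count as an assumption:

actions:
  1:
    action: delete_indices
    description: Delete indices older than the retention period
    options:
      ignore_empty_list: True
    filters:
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'   # assumed daily index naming
      unit: days
      unit_count: 7            # raise this value to keep logs longer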

Note: You must independently calculate the resources necessary for your task and your environment.

6. Enabling X-Pack in Elasticsearch/Logstash/Kibana

By default, centralized logging is preconfigured to use ELK without X-Pack. If you want X-Pack installed, use a custom cluster/platform specification to switch to images that include X-Pack and set the xpackEnable option to true (add the overriding values under the “logging” section of the custom spec):

    logging:
      values:
        elasticsearch:
          image:
            name: elasticsearch/elasticsearch
          cluster:
            xpackEnable: true
        kibana:
          image:
            name: kibana/kibana
        logstash:
          image:
            name: logstash/logstash
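
For clarity, the fragment above nests into the full specification like this (the nesting under spec/features is inferred from the other spec examples in this document):

spec:
  features:
    logging:
      values: {}   # the elasticsearch / kibana / logstash overrides above go here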

To use custom specifications, click the Customize Cluster Specification button instead of Confirm and Install when creating a platform:

Config

7. Search Guard (ELK Multi-user access)

Kublr uses the Search Guard Open Source security plugin to provide multi-user access to Elasticsearch and Kibana.

As the Community Edition is used, we implemented our own mechanism for provisioning Kublr roles to Search Guard, because AD/LDAP/etc. are available only in the Search Guard Enterprise Edition. The Kublr administrator does not need to worry about configuring roles in the Search Guard configuration files, except in complex cases.

Installation

By default, centralized logging is preconfigured to use ELK without Search Guard.

To switch on Search Guard, use the following values in the custom specification:

  features:
    logging:
      values:
        searchguard:
          enabled: true

Access control

Kublr manages Search Guard roles. As soon as a new cluster is created in a space, a new Search Guard role is created.

A role is created per space. This means that all users who have ‘List’ access to a Kublr space resource will have access to all logs of all clusters of that space. An illustrative sketch of the generated configuration follows below.
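
As an illustration, assuming the Search Guard 6 configuration format, the mapping that ties the Kublr backend role for the ‘default’ space (the kublr:default backend role appears in the Troubleshooting section below) to a Search Guard role could look like this; the actual generated names are managed by Kublr:

kublr_default:                 # hypothetical Search Guard role name
  backendroles:
    - 'kublr:default'          # backend role assigned by Kublr for the 'default' space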

Searchguard roles

Searchguard indices

Kublr provides default index patterns for each created space. By default, there are kublr_default* and kublr* index patterns. The first can be used to see all logs of all clusters of the ‘default’ space. The second allows the admin to access any logs, including the logs of the Kublr Platform cluster.

As Kibana Multitenancy is part of the Enterprise Edition of Search Guard, there is no way to hide kublr* and other index patterns that cannot be accessed by a user. However, Search Guard restricts access at the index layer, and a user will not get access to indexes belonging to other spaces:

Searchguard no access

At the same time, a user who is granted access to the logs of a space’s clusters can see them:

Searchguard Kibana overview

Roles customization

If it is necessary to specify permissions more narrowly, the administrator can modify the Search Guard configuration using the sgadmin utility. All necessary certificates are stored in the kublr-logging-searchguard secret in the ‘kublr’ namespace of the platform cluster where centralized logging is deployed.

There is a simple way to retrieve and apply Search Guard config using logging-controller pod:

$ kubectl exec -it -n kublr $(kubectl get pods -n kublr \
           -o=custom-columns=NAME:.metadata.name | grep logging-controller) -- /bin/bash
bash-4.4$ cd /home/centrolog
bash-4.4$ /opt/logging-controller/retrieve.sh    # fetch the current Search Guard config
bash-4.4$ ls
sg_action_groups.yml   sg_config.yml  sg_internal_users.yml  sg_roles.yml  sg_roles_mapping.yml
# modify the necessary files using vi
bash-4.4$ /opt/logging-controller/apply.sh       # apply the modified config

As Kublr manages the space-based roles, do not use the ‘kublr:’ prefix for your own roles. Please refer to the Search Guard documentation for guidance on roles, role mappings, and other configuration. A hypothetical example follows below.
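
For example, a custom read-only role limited to the ‘default’ space indexes could be added to sg_roles.yml as follows, assuming the Search Guard 6 configuration format (the role name, index pattern, and permission are illustrative):

myteam_default_readonly:       # note: no 'kublr:' prefix in the role name
  indices:
    'kublr_default*':          # index pattern for the 'default' space (see above)
      '*':                     # all document types
        - READ                 # standard Search Guard read action group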

Troubleshooting

If the restrictions or access rights do not behave as expected, it is possible to trace the interaction between Kublr and Search Guard.

First of all, check the kublr-logging-kibana pod and the logs of its sg-auth-proxy container. The following entry contains information about the user and their roles:

2019/07/02 18:27:00.809099 proxy.go:108: User '383f7ac8-8e32-4157-99c8-221c28fc1417': 
          name=michael, roles=[uma_authorization user kublr:default]

Second, retrieve the Search Guard configuration files, as described above.

If you’re unsure what attributes are accessible, you can always query the /_searchguard/authinfo endpoint to check. The endpoint will list all attribute names for the currently logged-in user. You can use Kibana Dev Tools and issue the request GET _searchguard/authinfo.