Kubernetes Cluster Hardware Recommendations

Overview

This document covers the minimum hardware recommendations for the Kublr Platform and for Kublr Kubernetes clusters. After reading it, you can proceed with deploying the Kublr Platform and a Kubernetes cluster.

Kublr Kubernetes Cluster Requirements

| Role | Minimum required memory | Minimum required CPU (cores) | Components |
|------|-------------------------|------------------------------|------------|
| Master node | 2.8 GB | 1.5 | Kublr-Kubernetes master components (k8s-core, cert-updater, fluentd, kube-addon-manager, rescheduler, network, etcd, proxy, kubelet) |
| Worker node | 1 GB | 0.5 | Kublr-Kubernetes worker components (fluentd, dns, proxy, network, kubelet) |
| Centralized monitoring agent * | 2 GB | 0.7 | Prometheus. We recommend a 2 GB limit for a typical managed-cluster installation with 8 worker nodes and 40 pods per node (320 pods in total). The retention period for the Prometheus agent is 1 hour. |
| Centralized logging agent * | 0.5 GB | 0.4 | RabbitMQ |

Kublr Platform Feature Requirements

| Feature | Required memory | Required CPU |
|---------|-----------------|--------------|
| Control Plane | 1.9 GB | 1.2 |
| Centralized monitoring | 5 GB | 1.2 |
| Centralized logging | 11 GB | 1.4 |
| k8s core components | 0.5 GB | 0.15 |

Kublr Platform Deployment Example

A single-master Kubernetes cluster with one or two worker nodes (two for basic reliability), using all of Kublr's features.

For a minimal Kublr Platform installation you need one master node with 4 GB memory and 2 CPU cores, plus worker nodes with a total of 10 GB + 1 GB × (number of worker nodes) of memory and 4.4 + 0.5 × (number of worker nodes) CPU cores.

Please note: we do not recommend using this configuration in production, but it is suitable for starting to explore the Kublr Platform.
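The minimal-installation formula above can be sketched as a small helper. This is an illustrative sketch, not part of Kublr itself; the function name is hypothetical:

```python
def platform_minimum(num_workers: int) -> tuple[float, float]:
    """Total worker-pool memory (GB) and CPU (cores) for a minimal
    Kublr Platform installation, per the formula above."""
    memory_gb = 10 + 1 * num_workers     # 10 GB base + 1 GB per worker node
    cpu_cores = 4.4 + 0.5 * num_workers  # 4.4 cores base + 0.5 per worker node
    return memory_gb, cpu_cores

# Two worker nodes (the basic-reliability setup):
print(platform_minimum(2))
```

For two workers this gives 12 GB of memory and 5.4 CPU cores across the worker pool, which is why the instance types below provide comfortably more.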

| Provider | Master Instance Type | Worker Instance Type |
|----------|----------------------|----------------------|
| Amazon Web Services | t2.medium/t3.medium (2 vCPU, 4 GB) | 2 × t2.xlarge/t3.xlarge (4 vCPU, 16 GB) |
| Google Cloud Platform | n1-standard-2 (2 vCPU, 7.5 GB) | 2 × n1-standard-4 (4 vCPU, 15 GB) |
| Microsoft Azure | A2 v2 (2 vCPU, 4 GB) | 2 × A8 v2 (8 vCPU, 16 GB) |
| On-premises | 2 vCPU, 5 GB | 2 × VM (3 vCPU, 10 GB) |

Workload Example

Master node: Kublr-Kubernetes master components (2.8 GB, 1.5 vCPU).

Worker node 1: Kublr-Kubernetes worker components (1 GB, 0.5 vCPU), Control Plane (1.9 GB, 1.2 vCPU), Centralized monitoring (5 GB, 1.2 vCPU), k8s core components (0.5 GB, 0.15 vCPU), Centralized logging (11 GB, 1.4 vCPU).

Worker node 2: Kublr-Kubernetes worker components (1 GB, 0.5 vCPU), Centralized logging (11 GB, 1.4 vCPU).
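Summing the per-component figures from the tables above gives the total footprint of each node in this example. A minimal sketch (component tuples are (memory GB, vCPU) taken directly from the tables):

```python
# Per-component requirements (memory GB, vCPU) from the tables above.
worker_components = (1.0, 0.5)
control_plane = (1.9, 1.2)
monitoring = (5.0, 1.2)
k8s_core = (0.5, 0.15)
logging = (11.0, 1.4)

def total(*parts):
    """Sum memory and vCPU over a set of components."""
    return (sum(p[0] for p in parts), sum(p[1] for p in parts))

worker1 = total(worker_components, control_plane, monitoring, k8s_core, logging)
worker2 = total(worker_components, logging)
print(worker1)  # worker node 1 footprint
print(worker2)  # worker node 2 footprint
```

Worker node 1 totals 19.4 GB and 4.45 vCPU, and worker node 2 totals 12 GB and 1.9 vCPU, which illustrates why the heavy features are spread across two workers.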

Self-Hosted Features

Kublr has several self-hosted features, which can be installed separately in Kublr-Kubernetes clusters.

| Feature | Required memory | Required CPU |
|---------|-----------------|--------------|
| Self-hosted logging | 9 GB | 1 |
| Self-hosted monitoring | 2.8 GB | 1.4 |

Calculating Needed Memory and CPU Availability for Business Applications

Note: by default Kublr disables scheduling of business applications on the master node (you can change that), so only worker nodes appear in the formulas below.

Available memory = (number of nodes) × (memory per node) - (number of nodes) × 1 GB - (has self-hosted logging) × 9 GB - (has self-hosted monitoring) × 2.8 GB - 0.4 GB - 2 GB (centralized monitoring agent per cluster) - 0.3 GB (centralized logging agent per cluster).

Available CPU = (number of nodes) × (vCPU per node) - (number of nodes) × 0.5 - (has self-hosted logging) × 1 - (has self-hosted monitoring) × 1.4 - 0.1 - 0.7 (centralized monitoring agent per cluster) - 0.3 (centralized logging agent per cluster).

Example

A user wants to create a Kublr-Kubernetes cluster with 5 n1-standard-4 nodes (on Google Cloud Platform), with self-hosted logging enabled but self-hosted monitoring disabled. Then:

  • Available memory = 5 × 15 - 5 × 1 - 1 × 9 - 0 × 2.8 - 0.4 - 2 - 0.3 = 58.3 GB.
  • Available CPU = 5 × 4 - 5 × 0.5 - 1 × 1 - 0 × 1.4 - 0.1 - 0.7 - 0.3 = 15.4 vCPU.
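The two formulas can be captured in a single helper so the calculation is repeatable for other node counts and instance sizes. This is an illustrative sketch of the arithmetic above; the function name and signature are ours, not part of any Kublr tooling:

```python
def available_capacity(nodes: int, mem_per_node: float, cpu_per_node: float,
                       self_logging: bool, self_monitoring: bool) -> tuple[float, float]:
    """Memory (GB) and CPU (cores) left for business applications
    on the worker nodes, following the formulas above."""
    mem = (nodes * mem_per_node
           - nodes * 1.0                        # Kublr worker components per node
           - (9.0 if self_logging else 0.0)     # self-hosted logging
           - (2.8 if self_monitoring else 0.0)  # self-hosted monitoring
           - 0.4                                # fixed memory overhead
           - 2.0                                # centralized monitoring agent
           - 0.3)                               # centralized logging agent
    cpu = (nodes * cpu_per_node
           - nodes * 0.5                        # Kublr worker components per node
           - (1.0 if self_logging else 0.0)     # self-hosted logging
           - (1.4 if self_monitoring else 0.0)  # self-hosted monitoring
           - 0.1                                # fixed CPU overhead
           - 0.7                                # centralized monitoring agent
           - 0.3)                               # centralized logging agent
    return mem, cpu

# 5 × n1-standard-4 (15 GB, 4 vCPU each), self-hosted logging on, monitoring off:
print(available_capacity(5, 15, 4, True, False))
```

For the example above this yields 58.3 GB and 15.4 vCPU, matching the hand calculation.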

Note: in this case you will use the centralized monitoring available in the Kublr Platform instead of self-hosted monitoring.