Register externally provisioned clusters

Starting with version 1.18.0, the Kublr Platform supports registration and management of externally provisioned Kubernetes clusters.

This feature is in technical preview status in Kublr 1.18.0.

While Kublr cannot manage the infrastructure (nodes, networks) and Kubernetes component configuration of such clusters as it does for Kublr-provisioned clusters, it still provides a number of value-added capabilities:

  • common view and repository of clusters in the Kublr Platform,
  • Kublr cluster management API,
  • Kublr RBAC, enabling centralized management of user access to different clusters and cluster groups,
  • Kublr Kubernetes RBAC management UI, providing administrators with convenient tools to manage Kubernetes RBAC,
  • web console Kubernetes CLI,
  • integration with centralized log collection and monitoring,
  • add-on feature deployment: ingress controller, cert-manager, in-cluster log collection and monitoring.

Externally provisioned cluster requirements

The following requirements must be satisfied for an externally provisioned cluster to be registered in the Kublr Platform.

These requirements will be loosened in future versions of Kublr.

  • Two pod priority classes exist in the cluster, created as follows:
    kubectl create priorityclass kublr-default  --value=100
    kubectl create priorityclass kublr-critical --value=1000000
    
  • Privileged pods are allowed in the cluster;
  • (Optional) The Kubernetes dashboard is deployed in the cluster, for example:
    kubectl apply -f \
        https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.1/aio/deploy/recommended.yaml
    
  • The cluster kubeconfig file's current context is configured with token credentials bound to the cluster-admin cluster role in the cluster.
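These requirements can be checked with kubectl before registration; a quick sketch, assuming kubectl is already pointed at the target cluster:

```shell
# Check that both Kublr priority classes exist in the cluster
kubectl get priorityclass kublr-default kublr-critical

# Check that the current context's credentials carry cluster-admin
# permissions; this should print "yes"
kubectl auth can-i '*' '*'
```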

Registering an external cluster in the Kublr Platform

  • Prepare the cluster as described in the previous section.

  • Create a new credentials object of type Kubeconfig. (Screenshot: Create Kubeconfig credentials)

  • Create a new cluster of type External Cluster, selecting the just-created credentials object and entering at least one Kubernetes API endpoint (in most cases this is the same URL as the one specified in the server field of the cluster’s kubeconfig file). (Screenshot: Create external cluster)

  • (Optional) Enable and configure additional features.

  • Click the Confirm and Install button.
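The API endpoint to enter in the External Cluster form can be read from the kubeconfig file's server field. A minimal sketch, using a hypothetical kubeconfig file created purely for illustration:

```shell
# Write a hypothetical, minimal kubeconfig for illustration
# (the server address is a documentation example, not a real endpoint)
cat > /tmp/example-kubeconfig <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: example
  cluster:
    server: https://203.0.113.10:6443
EOF

# The endpoint to enter in Kublr is the value of the "server" field
# of the relevant cluster entry
awk '/server:/ {print $2}' /tmp/example-kubeconfig
# → https://203.0.113.10:6443
```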

Step-by-step guide for GKE and EKS

  1. Provision an external cluster and download its kubeconfig file

    • GKE

      • Provision a GKE Kubernetes cluster via Google console or gcloud CLI tools.

      • Download the cluster config into the ./gke-kubeconfig file:

        export KUBECONFIG="$(pwd)/gke-kubeconfig"
        
        gcloud container clusters get-credentials <cluster> --zone <zone> --project <project>
        
    • AWS EKS

      • Provision an AWS EKS Kubernetes cluster via AWS console or aws CLI tools.

      • Download the cluster config into the ./eks-kubeconfig file:

        export KUBECONFIG="$(pwd)/eks-kubeconfig"
        
        aws eks update-kubeconfig --name <cluster> --kubeconfig "${KUBECONFIG}"
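In either case, a quick check that the downloaded kubeconfig can reach the cluster (assumes KUBECONFIG is still exported as shown above):

```shell
# Show which context the downloaded kubeconfig uses
kubectl config current-context

# List the cluster's nodes to confirm connectivity and credentials
kubectl get nodes
```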
        
  2. Prepare the cluster and the downloaded kubeconfig file for registration in Kublr Platform:

    • Create the kublr service account associated with the cluster-admin role, and other required objects in the cluster:

      # Create a service account
      kubectl create sa -n kube-system kublr
      
      # Create a cluster-admin cluster role binding for the service account
      kubectl create clusterrolebinding kublr-cluster-admin \
          --clusterrole=cluster-admin --serviceaccount=kube-system:kublr
      
      # Create required priority classes
      kubectl create priorityclass kublr-default  --value=100
      kubectl create priorityclass kublr-critical --value=1000000
      
    • For Kubernetes 1.24+, create a service account token secret (see the Kubernetes documentation):

      kubectl apply -f - <<EOF
      apiVersion: v1
      kind: Secret
      metadata:
        name: kublr-sa-token
        namespace: kube-system
        annotations:
          kubernetes.io/service-account.name: kublr
      type: kubernetes.io/service-account-token
      EOF
      
    • Create a new user entry in the kubeconfig file with the service account’s token, create a corresponding context, and make that context current:

      # Get the service account's token value from Kubernetes API
      SA_TOKEN="$(kubectl get secret -n kube-system "kublr-sa-token" \
          -o jsonpath='{.data.token}' | base64 -d)"
      
      # Create a user section with the token in the kubeconfig file
      kubectl config set-credentials kublr "--token=${SA_TOKEN}"
      
      # Create a context for the user section in the kubeconfig file
      kubectl config set-context kublr --user kublr \
          --cluster "$(kubectl config view -o 'jsonpath={.clusters[0].name}')"
      
      # Set the context as current
      kubectl config use-context kublr
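Before moving on, it is worth smoke-testing the new context; a sketch, assuming the commands above succeeded:

```shell
# The token credentials should authenticate against the API server...
kubectl --context kublr get nodes

# ...and should be bound to cluster-admin; this should print "yes"
kubectl --context kublr auth can-i '*' '*'
```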
      
    • (Optional) Deploy Kubernetes dashboard to the cluster if it is not available.

      kubectl apply -f \
          https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml
      

      If deployed, Kubernetes dashboard will be available in Kublr UI.

      See the Kubernetes documentation and the Kubernetes dashboard GitHub project for more details on Kubernetes dashboard deployment options.

    • (Optional) Deploy Kubernetes metrics server to the cluster if it is not available.

      kubectl apply -f \
          https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml
      

      If Kubernetes metrics server is deployed, Kublr UI will display cluster resource usage.

      See the Kubernetes documentation and the Kubernetes metrics server GitHub project for more details on Kubernetes metrics server deployment options.
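Once the metrics server pods are running (this may take a minute or two), resource usage reporting can be verified from the CLI:

```shell
# Display CPU/memory usage for nodes and for all pods;
# both commands require a working metrics server
kubectl top nodes
kubectl top pods --all-namespaces
```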

  3. Register the cluster in Kublr Platform as described in the previous section.

Constraints and Limitations

  1. AWS Fargate does not support privileged pods; therefore some Kublr features, in particular the web CLI and log collection, will not be available on Fargate-only AWS EKS clusters.