Kublr Cluster Specification

1. Overview

The Kublr cluster specification is a metadata object maintained by the Kublr control plane that provides an abstract description of a Kublr-managed Kubernetes cluster.

The standard serialization format for a Kublr cluster specification is YAML, although it may also be serialized as JSON, protobuf, or any other structured serialization format.

The two root object types of the cluster spec are Secret and Cluster.

All root objects share a high-level structure compatible with the Kubernetes metadata structure:

kind: Secret # object kind
metadata:
  name: secret1 # object name
spec:
  # ...

Usually the Kublr cluster definition file is a YAML file that contains one or more Secret objects and one Cluster object, e.g.:

kind: Secret
metadata:
  name: secret1
spec:
  awsApiAccessKey:
    accessKeyId: '...'
    secretAccessKey: '...'
---
kind: Cluster
metadata:
  name: 'my-cluster'
spec:
  # ...

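A cluster definition file is thus a multi-document YAML stream. As a minimal illustration of how the root objects can be separated, the following stdlib-only Python sketch splits a definition on the `---` document separator and reads each document's `kind` (the helper names are illustrative; a real implementation would use a proper YAML parser):

```python
# Minimal sketch (stdlib only): split a Kublr cluster definition into root
# objects on the YAML document separator '---' and read each document's kind.
CLUSTER_DEFINITION = """\
kind: Secret
metadata:
  name: secret1
spec:
  awsApiAccessKey:
    accessKeyId: '...'
    secretAccessKey: '...'
---
kind: Cluster
metadata:
  name: 'my-cluster'
spec: {}
"""

def split_docs(text):
    # YAML documents in one stream are separated by a line consisting of '---'
    docs, current = [], []
    for line in text.splitlines():
        if line.strip() == "---":
            docs.append("\n".join(current))
            current = []
        else:
            current.append(line)
    docs.append("\n".join(current))
    return docs

def kind_of(doc):
    # the root 'kind:' key is the only unindented 'kind:' line in a document
    for line in doc.splitlines():
        if line.startswith("kind:"):
            return line.split(":", 1)[1].strip()
    return None

kinds = [kind_of(d) for d in split_docs(CLUSTER_DEFINITION)]
assert kinds == ["Secret", "Cluster"]
```
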
When a cluster specification is provided in the Kublr UI, Secret objects MUST NOT be included. In this case Secret objects must be managed in the UI and only referred to from the cluster spec.

2. Secret object

Secrets are used to store and pass various credentials to the generator.

Different types of secrets may be used in different circumstances.

Currently the following types of secrets are supported:

AWS account credentials

kind: Secret
metadata:
  name: secret1
spec:
  awsApiAccessKey:
    accessKeyId: '...'
    secretAccessKey: '...'

Azure account credentials

kind: Secret
metadata:
  name: secret1
spec:
  azureApiAccessKey:
    subscriptionId: '...'
    tenantId: '...'
    aadClientId: '...'
    aadClientSecret: '...'

SSH key

kind: Secret
metadata:
  name: secret1
spec:
  sshKey:
    sshPublicKey: '...'

3. Cluster definitions

Main elements of cluster definition include:

  • Kublr version information

  • Agent discovery information

  • Overlay network specification

  • Locations specifications

  • Secret store specification

  • Master instances group specification

  • Node instances groups specifications

  • Kublr agent custom configuration

  • Features configuration (deprecated)

    In the first versions of Kublr, features such as ingress, logging, and monitoring were also configured in the cluster specification. This capability is deprecated and will not be documented here.

3.1. Kublr version

The Kublr version is set in the kublrVersion field of the cluster specification. If the field is not defined in the input file, the generator will set it to its own version.

spec:
  kublrVersion: '1.9.2-ga2'
  # ...

3.2. Agent discovery and instance initialization

Every time a new node or master instance starts in a Kublr-managed Kubernetes cluster, the Kublr agent daemon must start on it, so the agent has to be available on the instance for it to function normally. Kublr supports several methods of placing the agent on cluster instances, depending on the environment.

Agent discovery and instance initialization methods include:

  • relying on pre-installed software (e.g. pre-configured AMI in AWS EC2 or VHD in Azure),

  • downloading Kublr agent from a web location and installing required software after instance start.

3.2.1. Run cluster using pre-configured AMI in AWS

A specific AMI ID may be provided for cluster instances via the instance group location properties spec.master.locations[].aws.overrideImageId (for master instances) and spec.nodes[].locations[].aws.overrideImageId (for node instances).

If the provided AMI includes the Kublr software pre-installed, that is all that is needed for the cluster to start.

kind: Cluster
metadata:
  name: 'my-cluster'
spec:
  # ...
  master:
    locations:
    - # ...
      aws:
        overrideImageId: ami-12345678
      # ...
  # ...
  nodes:
  - name: 'node-group'
    locations:
    - # ...
      aws:
        overrideImageId: ami-12345678
      # ...
  # ...

3.2.2. Run cluster using pre-configured VHD in Azure

A specific OS disk image may be provided for cluster instances via the instance group location properties spec.master.locations[].azure.osDisk (for master instances) and spec.nodes[].locations[].azure.osDisk (for node instances).

If the provided image includes the Kublr software pre-installed, that is all that is needed for the cluster to start.

kind: Cluster
metadata:
  name: 'my-cluster'
spec:
  # ...
  nodes:
  - name: 'node-group'
    locations:
    - # ...
      azure:
        osDisk:
          type: vhd # image | vhd | managedDisk
          imageId: '...'
          imagePublisher: '...'
          imageOffer: '...'
          imageVersion: '...'
          sourceUri: '...'
      # ...
  # ...

3.2.3. Get Kublr agent in runtime from a web URL

The agent can be downloaded from a URL and set up on each instance during initialization.

This slows down instance initialization slightly, but allows using general-purpose VM images for the Kubernetes cluster.

kind: Cluster
metadata:
  name: 'my-cluster'
spec:
  # ...

  # Use dynamic instance initialization with Kublr agent downloaded from a repository
  kublrAgentTgzUrl: 'https://nexus.ecp.eastbanctech.com/...'
  kublrAgentRepositoryUsername: '...'
  kublrAgentRepositoryPassword: '...'

  # ...

3.3. Overlay network

The overlay network section configures the Kubernetes cluster CNI overlay network plugin and overlay network properties.

spec:
  # ...
  network:
    provider: 'cni-canal'
    clusterCIDR: '100.64.0.0/10'
    serviceCIDR: '100.64.0.0/13'
    podCIDR: '100.96.0.0/11'
    masterIP: '100.64.0.1'
    dnsIP: '100.64.0.10'
    dnsDomain: 'cluster.local'
  # ...

The example shows default values for Kublr cluster network properties.
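The defaults above are internally consistent: the service and pod CIDRs are subsets of the cluster CIDR, and the master and DNS IPs fall inside the service CIDR. This can be checked with Python's stdlib ipaddress module (a verification sketch, not part of the Kublr spec):

```python
# Verification sketch: the default Kublr network properties are nested
# consistently. Values are copied from the example above.
import ipaddress

cluster_cidr = ipaddress.ip_network('100.64.0.0/10')
service_cidr = ipaddress.ip_network('100.64.0.0/13')
pod_cidr = ipaddress.ip_network('100.96.0.0/11')

assert service_cidr.subnet_of(cluster_cidr)
assert pod_cidr.subnet_of(cluster_cidr)
assert not service_cidr.overlaps(pod_cidr)
assert ipaddress.ip_address('100.64.0.1') in service_cidr   # masterIP
assert ipaddress.ip_address('100.64.0.10') in service_cidr  # dnsIP
```
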

3.4. Locations

A Kublr cluster may use several “locations”, each of which represents a “significantly separate” environment. The interpretation of a location depends on the specific infrastructure provider. For example, an AWS location is characterized by an AWS account and region; an Azure location is similarly characterized by an Azure subscription and region.

See the location sections below for more information.

spec:
  # ...
  locations:
  - name: '...'
    # ...

The current Kublr version only supports single-location clusters for fully automatic initialization and operation. Multi-location clusters may be configured and deployed with some additional manual configuration.

Three types of locations are currently supported: aws, azure, and baremetal.

3.4.1. AWS location

An AWS location is mapped to a CloudFormation stack created by Kublr in one AWS region under a specific AWS account.

Most AWS location specification parameters are optional, and Kublr uses a number of smart defaults and rules to fill in the gaps. The only mandatory fields of the AWS location spec are the AWS region ID and a reference to the secret object containing the AWS access key and secret key.

Thus, a minimal AWS location spec definition may look as follows:

spec:
  # ...
  locations:
  - name: 'aws1'
    aws:
      awsApiAccessSecretRef: 'secret1'
      region: 'us-east-1'
  # ...

The following documented example of AWS location definition describes all available parameters:

spec:
  # ...
  locations:
  - name: 'aws1'
    aws:
      # Reference to the secret object containing AWS access key and secret key
      # to access this location
      #
      # Required
      #
      awsApiAccessSecretRef: 'secret1'

      # AWS region
      #
      # Required
      #
      region: 'us-east-1'

      # AWS accountId
      #
      # Optional
      #
      # If omitted, it will be populated automatically based on the secret.
      # If specified, it must correspond to the account ID of the provided AWS
      # secret
      #
      accountId: '1234567890'

      # VPC Id
      #
      # Optional
      #
      # If omitted, a new VPC will be created, otherwise existing VPC will be
      # used.
      #
      vpcId: 'vpc-12345'

      # IP address range for instances in this VPC.
      #
      # Optional
      #
      # If omitted, one of 16 standard private /16 IP
      # ranges (172.16.0.0/16, ... , 172.31.0.0/16) will be assigned.
      #
      vpcCidrBlock: '172.16.0.0/16'

      # AWS region availability zones to be used for Kubernetes cluster in this
      # location.
      #
      # Optional
      #
      # If omitted, it will be populated automatically to all zones available
      # for this account in this region
      #
      availabilityZones:
        - us-east-1b
        - us-east-1f

      # CIDR block allocation for various purpose subnets in this location.
      #
      # Optional
      #
      # This replaces deprecated properties masterCIDRBlocks, nodesCIDRBlocks,
      # and publicSubnetCidrBlocks
      #
      # CIDR blocks in the following arrays are specified according to
      # availability zone indices.
      #
      # Availability zone index is the index of the zone in the list of all
      # possible zones in this region, ordered in a standard
      # lexicographical order. E.g. zones 'us-east-1a', 'us-east-1c', and
      # 'us-east-1d' have indices 0, 2, and 3 correspondingly.
      #
      # Therefore, for example, if three public masters are defined, and two
      # masters are placed in the zone 'us-east-1b' (zone index is 1) and one
      # master is placed in the zone 'us-east-1d' (zone index is 3), then at
      # least the following CIDRs must be specified:
      #
      # masterPublic:
      #   - ''
      #   - '<cidr for master subnet in zone us-east-1b>'
      #   - ''
      #   - '<cidr for master subnet in zone us-east-1d>'
      #
      # Each value in these arrays must either be a valid CIDR or an empty
      # string (if unused or undefined).
      #
      # Generator will use its own set of rules when trying to specify CIDR
      # blocks that are needed but undefined in the spec.
      # It will not try to adjust these rules to accommodate user-specified
      # CIDRs.
      #
      # Automatic CIDR generation rules on an example of 172.16.0.0/16 global CIDR:
      #  - 172.16.0.0/17 - reserved for public subnets
      #    - 172.16.0.0/20 - reserved for public master and other subnets
      #        - 172.16.0.0/23 - reserved for various non-master/auxiliary public subnets
      #        - 172.16.0.0/26 - reserved
      #        - 172.16.0.64/26, ... , 172.16.1.192/26 - allocated for otherPublic
      #                                                  (zones 0, 1, ... , 6)
      #                                                  (7 blocks of 64 IPs each)
      #      - 172.16.2.0/23, ... , 172.16.14.0/23 - allocated for masterPublic
      #                                              (zones 0, 1, ... , 6)
      #                                              (7 blocks of 512 IPs each)
      #    - 172.16.16.0/20, ... , 172.16.112.0/20 - allocated for nodePublic
      #                                              (zones 0, 1, ... , 6)
      #                                              (7 blocks of 16K IPs each)
      #  - 172.16.128.0/17 - reserved for private subnets
      #    - 172.16.128.0/20 - reserved for private master and other subnets
      #      - 172.16.128.0/23 - reserved for various non-master/auxiliary private subnets
      #      - 172.16.130.0/23, ... , 172.16.142.0/23 - allocated for masterPrivate
      #                                                 (zones 0, 1, ... , 6)
      #                                                 (7 blocks of 512 IPs each)
      #    - 172.16.144.0/20, ... , 172.16.240.0/20 - allocated for nodePrivate
      #                                               (zones 0, 1, ... , 6)
      #                                               (7 blocks of 16K IPs each)
      #
      cidrBlocks:
        # CIDR blocks for subnets used for public master groups
        masterPublic:
          - ''
          - '172.16.4.0/23'
          - ''
          - ''
          - ''
          - '172.16.12.0/23'
        # CIDR blocks for subnets used for private master groups
        masterPrivate:
          - ''
          - '172.16.132.0/23'
          - ''
          - ''
          - ''
          - '172.16.140.0/23'
        # CIDR blocks for subnets used for public node groups
        nodePublic:
          - ''
          - '172.16.32.0/20'
          - ''
          - ''
          - ''
          - '172.16.96.0/20'
        # CIDR blocks for subnets used for private node groups
        nodePrivate:
          - ''
          - '172.16.144.0/20'
          - ''
          - ''
          - ''
          - '172.16.208.0/20'
        # CIDR blocks used for public subnets necessary for other purposes (e.g.
        # placing NAT and bastion host in situation when no other public subnets
        # exist)
        otherPublic:
          - ''
          - '172.16.0.128/26'
          - ''
          - ''
          - ''
          - '172.16.1.128/26'

      # The following properties specify AWS IAM roles and instance profiles for
      # master and node instances.
      #
      # Optional
      #
      # Supported in Kublr >= 1.10.2
      #
      # If defined then existing AWS IAM objects will be used.
      #
      # If not defined then Kublr will create corresponding objects (roles and
      # instance profiles).
      #
      # If role is created by Kublr, its path and name will be in the following
      # format: '/kublr/<cluster-name>-<location-name>-(master|node)'
      #
      iamRoleMasterPathName: '/kublr/master-role-A5FF3GD'
      iamInstanceProfileMasterPathName: null
      iamRoleNodePathName: 'node-role'
      iamInstanceProfileNodePathName: null

      # If this property is set to true, Kublr will enable the cluster CloudFormation
      # stack termination protection.
      #
      # Optional
      #
      # Supported in Kublr >= 1.10.2
      #
      enableTerminationProtection: true

      # Skip creating security groups of different types.
      #
      # Optional
      #
      # Supported in Kublr >= 1.10.2
      #
      # Kublr can automatically create security groups for cluster instances.
      # In some situations it is not desirable or allowed, in which case the following
      # properties can be used to skip automatic security groups creation.
      #
      # See also 'existingSecurityGroupIds' properties in AWS location and node groups'
      # AWS location objects.
      #
      skipSecurityGroupDefault: true
      skipSecurityGroupMaster: true
      skipSecurityGroupNode: true

      # GroupId of existing security groups that need to be added to all instances.
      #
      # Optional
      #
      # Supported in Kublr >= 1.10.2
      #
      # More security groups may be added to specific node groups by specifying additional
      # GroupIds in the 'existingSecurityGroupIds' property of specific groups'
      # 'AWSInstanceGroupLocationSpec' objects.
      #
      existingSecurityGroupIds:
        - 'sg-835627'
        - 'sg-923835'

    # Kublr agent and Kubernetes customization parameters for this location
    #
    # Optional
    #
    # see "Custom Kublr agent configuration" for more details
    #
    kublrAgentConfig: ...
    # ...
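The availability-zone-index rule used by the cidrBlocks arrays above can be sketched in Python as follows (the helper functions are illustrative, not part of Kublr, and assume standard single-letter zone suffixes 'a'..'g'):

```python
# Illustrative sketch of the availability zone index rule: a zone's index is
# its position in the lexicographically ordered list of all possible zones in
# the region ('us-east-1a' -> 0, 'us-east-1b' -> 1, ...). Helper names are
# assumptions for illustration, not part of the Kublr spec.
import string

def zone_index(zone):
    # the trailing letter determines the index: 'us-east-1d' -> 'd' -> 3
    return string.ascii_lowercase.index(zone[-1])

def cidr_for_zone(cidr_array, zone):
    # look up a per-zone CIDR array (e.g. cidrBlocks.masterPublic); an empty
    # or missing entry means the generator will pick a CIDR by its own rules
    i = zone_index(zone)
    return cidr_array[i] if i < len(cidr_array) and cidr_array[i] else None

master_public = ['', '172.16.4.0/23', '', '', '', '172.16.12.0/23']
assert zone_index('us-east-1d') == 3
assert cidr_for_zone(master_public, 'us-east-1b') == '172.16.4.0/23'
assert cidr_for_zone(master_public, 'us-east-1a') is None
```
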

3.4.2. Azure location

An Azure location is mapped to a resource group and deployment objects created by Kublr in one Azure region under a specific Azure subscription.

Most Azure location specification parameters are optional, and Kublr uses a number of smart defaults and rules to fill in the gaps. The only mandatory fields of the Azure location spec are the Azure region ID and references to the secret objects containing the Azure access credentials and the SSH key necessary to initialize cluster instances.

Thus, a minimal Azure location spec definition may look as follows:

spec:
  # ...
  locations:
  - name: 'azure1'
    azure:
      azureApiAccessSecretRef: 'secret1'
      region: 'eastus'
      azureSshKeySecretRef: 'secretSsh1'
  # ...

The following documented example of Azure location definition describes all available parameters:

spec:
  # ...
  locations:
  - name: 'azure1'
    azure:
      # Reference to the secret object containing Azure credentials to access
      # this location
      #
      # Required
      #
      azureApiAccessSecretRef: 'secret1'

      # Azure region
      #
      # Required
      #
      region: 'eastus'

      # Reference to the secret object containing public SSH key
      #
      # Required
      #
      azureSshKeySecretRef: 'secretSsh1'

      # Azure Resource Group
      #
      # Optional
      #
      # If omitted, a new Resource Group will be created, otherwise an existing
      # one will be used.
      #
      resourceGroup: 'my-resource-group'

      # Azure Network Security Group.
      #
      # Optional
      #
      # If omitted, a new Network Security Group will be created, otherwise an
      # existing one will be used.
      #
      networkSecurityGroup: 'sg1'

      # Azure Route Table.
      #
      # Optional
      #
      # If omitted, a new Route Table will be created, otherwise an existing
      # one will be used.
      #
      routeTable: 'rt1'

      # Azure Storage Account type (e.g. Standard_LRS, Premium_LRS, etc.)
      #
      # Optional
      #
      # If omitted, the default of 'Standard_LRS' will be used.
      #
      storageAccountType: 'Standard_LRS'

      # Azure Virtual Network
      #
      # Optional
      #
      # If omitted, a new Virtual Network will be created, otherwise an
      # existing one will be used.
      #
      virtualNetwork: 'vn1'

      # Azure Virtual Network Subnet
      #
      # Optional
      #
      # If omitted, a new Virtual Network Subnet will be created, otherwise
      # an existing one will be used.
      #
      virtualNetworkSubnet: 'vns1'

      # IP address range for instances in this Virtual Network Subnet.
      #
      # Optional
      #
      # If omitted, a default will be assigned.
      #
      virtualNetworkSubnetCidrBlock: ...

    # Kublr agent and Kubernetes customization parameters for this location
    #
    # Optional
    #
    # see "Custom Kublr agent configuration" for more details
    #
    kublrAgentConfig: ...
    # ...

3.4.3. Baremetal location

A bare metal location corresponds to a number of pre-existing instances that are provisioned outside of Kublr.

When a bare metal location is used in a cluster specification, Kublr will not provision the specified instances; instead it provides administrators with initialization command lines that need to be executed on the specified instances to connect them to the cluster.

The bare metal location spec currently contains no parameters, and its definition may look as follows:

spec:
  # ...
  locations:
  - name: 'datacenter1'
    baremetal: {}
  # ...

3.5. Secret store

Kublr cluster instances (both masters and nodes) need access to a certain set of coordinated certificates and keys to work together. Distribution of such keys is ensured via the “secret store” mechanism.

The secret store type and its configuration are specified in the secretStore section of the cluster specification:

spec:
  # ...
  secretStore:
  # ...

Kublr supports several types of secret storage: AWS S3, Azure storage account, and file directory.

When the secret store specification is omitted from the cluster specification, Kublr will generate a suitable default. The first location that contains master instances will be used as the default location for the secret store, and the location type will determine the type of the secret store.

For example, if the first location with master instances is of the AWS type, the default secret store will be AWS S3.
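This selection rule can be sketched as follows (a simplified, dict-based model of the cluster spec; the helper is illustrative, not Kublr's actual code):

```python
# Illustrative sketch of the default secret store selection rule: use the
# first location that contains master instances; the location type determines
# the secret store type.
def default_secret_store(cluster_spec):
    master_refs = {loc["locationRef"] for loc in cluster_spec["master"]["locations"]}
    for loc in cluster_spec["locations"]:
        if loc["name"] not in master_refs:
            continue
        if "aws" in loc:
            return {"awsS3": {"locationRef": loc["name"]}}
        if "azure" in loc:
            return {"azureAS": {"locationRef": loc["name"]}}
        if "baremetal" in loc:
            return {"baremetal": {}}
    return None

spec = {
    "locations": [{"name": "aws1", "aws": {"region": "us-east-1"}}],
    "master": {"locations": [{"locationRef": "aws1"}]},
}
assert default_secret_store(spec) == {"awsS3": {"locationRef": "aws1"}}
```
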

3.5.1. AWS S3 secret store

The following documented example of AWS S3 secret store definition describes all available parameters:

spec:
  # ...
  secretStore:
    awsS3:
      # Reference to the location object in which the secret store S3 bucket is
      # created or located
      #
      # Optional
      #
      # If omitted, the first AWS location that contains master instances will
      # be used. If there are no AWS locations with master instances, the first
      # AWS location in the cluster spec will be used.
      #
      locationRef: 'aws1'

      # Secret store S3 bucket name.
      #
      # Optional
      #
      # If omitted, Kublr will generate a name for the bucket using the cluster
      # name as the name base
      #
      s3BucketName: 'kublr-k8s-cluster1-secret-store-bucket'
  # ...

3.5.2. Azure storage account secret store

The following documented example of an Azure storage account secret store definition describes all available parameters:

spec:
  # ...
  secretStore:
    azureAS:
      # Reference to the location object in which the secret store storage
      # account is created or located
      #
      # Optional
      #
      # If omitted, the first Azure location that contains master instances will
      # be used. If there are no Azure locations with master instances, the first
      # Azure location in the cluster spec will be used.
      #
      locationRef: 'azure1'

      # Whether to use existing storage account or create a new one.
      #
      # Optional
      #
      # Default - false
      #
      useExisting: false

      # Secret store storage account and container names.
      #
      # Optional
      #
      # If omitted, Kublr will generate randomized names using the cluster name
      # as the name base
      #
      storageAccountName: 'storageaccount1'
      storageContainerName: 'container1'
  # ...

3.5.3. Bare metal secret store type

The following is a documented example of a bare metal secret store definition.

This definition is just a marker that tells Kublr to use the control plane for secret initialization. The control plane will generate cluster secrets and include them in the Kublr agent installation package.

spec:
  # ...
  secretStore:
    baremetal: {}
  # ...

3.6. Instance groups

Kublr allows creation of clusters with multiple groups of nodes.

There must always be one designated instance group for master instances in the cluster spec, and then there may be any number of node instance groups defined.

The master instance group is defined in the spec.master section of the cluster spec, and node instance groups are defined in the spec.nodes section:

spec:
  # ...
  master: # master instance group definition
  # ...
  nodes: # node instance groups definitions
  - name: nodeGroup1
    # ...
  - name: nodeGroup2
    # ...

3.6.1. General instance group structure

The following is a documented example of an instance group definition:

spec:
  # ...
  nodes:
  -
    # Instance group name.
    #
    # Optional
    #
    # For masters, it must always be 'master'.
    # If omitted, it will be set to 'master' for a master instance group, and to
    # 'default' for a node instance group.
    # There may not be two instance groups with the same name within a
    # cluster.
    name: 'group1'

    # Kublr variant to use for this group. The Kublr variant ID is interpreted
    # differently in different contexts; e.g. it may be used to designate
    # an OS flavor and version (Ubuntu 16.04, RHEL 7.3, etc.) and be mapped onto
    # different base VM images used for this group's instances.
    #
    # Optional
    # Deprecated
    #
    # If omitted, generator will try to assign it automatically.
    #
    kublrVariant: 'Ubuntu 16.04'

    # Group instance number parameters
    # MUST: minNodes <= initialNodes <= maxNodes
    # If only one of the three parameters is specified, its value will be used
    # to initialize the other two.
    minNodes: 3
    initialNodes: 3
    maxNodes: 7

    # Whether this group size is managed by Kubernetes autoscaler or not
    autoscaling: false

    # Whether this group is stateful or not
    #
    # Instances in stateful groups are assigned unique identifiers - ordinal
    # numbers
    #
    # Non-stateful group instances are fully symmetrical and indistinguishable.
    #
    # Master group must always be stateful; Kublr will automatically set this
    # property to true for the master group.
    #
    stateful: false

    # The list of location specific parameters for this instance group.
    # Only one location per group is currently supported.
    # In the future it is possible that multiple locations will be allowed, but
    # only for master instance groups.
    #
    # Optional
    #
    # If omitted, generator will try to assign it automatically to the first
    # available location.
    locations:
    - locationRef: aws1
      # The following section includes instance group location type specific
      # parameters.
      # Only one of `aws`, `azure` or `baremetal` must be included, and the type
      # of the section MUST correspond to the referred location type.
      aws: ...
      azure: ...
      baremetal: ...

    # Kublr agent and Kubernetes customization parameters for this location
    #
    # Optional
    #
    # see "Custom Kublr agent configuration" for more details
    #
    kublrAgentConfig: ...
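The group-size rule above (minNodes <= initialNodes <= maxNodes, with a single specified value initializing the other two) can be sketched as follows (an illustrative helper, not Kublr's actual code):

```python
# Illustrative sketch of the group size rule: minNodes <= initialNodes <= maxNodes
# must hold, and a single specified value initializes the other two.
def normalize_group_size(min_nodes=None, initial_nodes=None, max_nodes=None):
    given = [v for v in (min_nodes, initial_nodes, max_nodes) if v is not None]
    if len(given) == 1:
        # one value given: use it for all three parameters
        min_nodes = initial_nodes = max_nodes = given[0]
    elif len(given) != 3:
        raise ValueError("specify one or all of minNodes/initialNodes/maxNodes")
    if not (min_nodes <= initial_nodes <= max_nodes):
        raise ValueError("must satisfy minNodes <= initialNodes <= maxNodes")
    return min_nodes, initial_nodes, max_nodes

assert normalize_group_size(initial_nodes=3) == (3, 3, 3)
assert normalize_group_size(3, 3, 7) == (3, 3, 7)
```
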

3.6.2. AWS instance group location parameters

The following is a documented example of an AWS instance group type specific parameters:

spec:
  # ...
  nodes:
  - name: 'group1'
    # ...
    locations:
    - locationRef: aws1
      aws:
        # ID of an existing SSH key pair to setup for the instances in this
        # group
        #
        # Optional
        #
        # If not specified, no SSH access will be configured
        #
        sshKey: 'my-aws-ssh-key'

        # Availability zones to limit this group to.
        #
        # Optional
        #
        # If defined, this list must be a subset of all zones available in
        # corresponding location.
        #
        # If omitted, generator will automatically assign it to all available
        # zones (or all zones specified in corresponding location).
        #
        # 'availabilityZones' array may include non-unique entries, which may make
        # sense for master node groups, or in cases where corresponding node group
        # must be associated with multiple subnets in the same availability zone
        # (see notes for 'subnetIds' property).
        #
        availabilityZones:
          - us-east-1b
          - us-east-1f

        # Subnet Ids
        #
        # Optional
        #
        # Supported in Kublr >= 1.10.2
        #
        # If omitted, subnets will be created to accommodate this instance group,
        # otherwise corresponding autoscaling group will be assigned to the specified
        # subnets.
        #
        # 'subnetIds' array elements must correspond to each AZ in 'availabilityZones'
        # array, so that for example, if
        # `availabilityZones == ['us-east-1a', 'us-east-1c', 'us-east-1d']` and
        # `subnetIds == ['subnet1', '', 'subnet3']`, then Kublr will assume
        # that 'subnet1' exists in AZ 'us-east-1a', 'subnet3' exists in 'us-east-1d',
        # and it will create a new subnet in 'us-east-1c'.
        #
        # Note also that if a subnet id is specified in a certain position of
        # 'subnetIds' array, a correct AZ in which this subnet is located MUST also
        # be specified in corresponding position of 'availabilityZones' array.
        #
        subnetIds:
          - ''
          - 'subnet-93292345'

        # GroupId of existing security groups that need to be added to this node group instances.
        #
        # Optional
        #
        # Supported in Kublr >= 1.10.2
        #
        # These security groups are in addition to the security groups specified in
        # the 'existingSecurityGroupIds' property of the corresponding AWS location object.
        #
        existingSecurityGroupIds:
          - 'sg-835627'
          - 'sg-923835'

        # AWS instance type
        instanceType: 't2.medium'

        # AMI id to use for instances in this group
        #
        # Optional
        #
        # If omitted, Kublr will try to locate AMI based on other parameters,
        # such as Kublr version, AWS region, Kublr variant etc
        overrideImageId: 'ami-123456'

        # Actual AMI ID used for this group
        # It does not need to be provided by the user; Kublr will fill it from
        # `overrideImageId` or based on image discovery rules and information.
        imageId: 'ami-123456'

        # Image root device name to use for this AMI
        #
        # Optional
        #
        # Kublr will request this value via AWS EC2 API
        imageRootDeviceName: '/dev/xda1'

        # root volume parameters (for all instance groups) and master etcd data
        # storage volume parameters (only valid for master instance group)
        #
        # Optional
        #
        rootVolume:
          # same structure as for `masterVolume` below
        masterVolume:
          # AWS EBS type
          #
          # Optional
          #
          type: 'gp2'

          # AWS EBS size in GB
          #
          # Optional
          #
          size: 48

          # AWS EBS iops (for iops optimized volumes only)
          #
          # Optional
          #
          iops: 300

          # Encrypted flag indicates if EBS volume should be encrypted.
          #
          # Optional
          #
          encrypted: false

          # The Amazon Resource Name (ARN) of the AWS Key Management Service
          # master key that is used to create the encrypted volume, such as
          # `arn:aws:kms:us-east-1:012345678910:key/abcd1234-a123-456a-a12b-a123b4cd56ef`.
          # If you create an encrypted volume and don’t specify this property,
          # AWS CloudFormation uses the default master key.
          #
          # Optional
          #
          kmsKeyId: 'arn:aws:kms:us-east-1:012345678910:key/abcd1234-a123-456a-a12b-a123b4cd56ef'

          # Snapshot ID to create EBS volume from
          #
          # Optional
          #
          snapshotId: 'snap-12345678'

          # deleteOnTermination property for ASG EBS mapping volumes
          #
          # Optional
          #
          deleteOnTermination: false

        # Master ELB allocation policy.
        # Only valid for the master instance group.
        # Allowed values:
        #   'default' ('public' for multi-master, and 'none' for single-master)
        #   'none' - no ELB will be created for master API
        #   'private' - only private ELB will be created for master API
        #   'public' - only public ELB will be created for master API
        #   'privateAndPublic' - both private and public ELBs will be created for master API
        #
        # Optional
        #
        masterElbAllocationPolicy: 'default'

        # Instance IP allocation policy.
        # Allowed values:
        #   'default' (same as 'public')
        #   'public' - public IPs will be assigned to this node group's instances;
        #              in AWS EC2 private IPs will also be assigned
        #   'private' - only private IPs will be assigned to this node group's instances
        #   'privateAndPublic' - in AWS EC2 it is equivalent to `public` above
        #
        # Optional
        #
        nodeIpAllocationPolicy: 'default'

        # Group EIP allocation policy - 'default', 'none', or 'public'.
        #
        # In addition to setting up AWS policy for managing dynamic IPs to
        # public or private, Kublr can automatically associate fixed Elastic IPs
        # but only for stateful instance groups.
        #
        # 'default' means:
        # - 'none' for multi-master groups (note that master groups are always stateful)
        # - 'none' for single-master groups with nodeIpAllocationPolicy==='private'
        # - 'public' for single-master groups with nodeIpAllocationPolicy!=='private'
        # - 'none' for stateful node groups with nodeIpAllocationPolicy==='private'
        # - 'public' for stateful node groups with nodeIpAllocationPolicy!=='private'
        # - 'none' for non-stateful node groups
        #
        # Constraints:
        # - eipAllocationPolicy may not be 'public' if nodeIpAllocationPolicy=='private'
        # - eipAllocationPolicy may not be 'public' if the group is not stateful
        #
        # Optional
        #
        eipAllocationPolicy: 'default'

        # AWS AutoScalingGroup parameters:
        # - Cooldown
        # - LoadBalancerNames
        # - TargetGroupARNs
        #
        # See AWS CloudFormation and EC2 documentation for more details
        #
        # Optional
        #
        cooldown: ...
        loadBalancerNames: ...
        targetGroupARNs:
          - ...

        # AWS LaunchConfiguration parameters:
        # - BlockDeviceMappings
        # - EbsOptimized
        # - InstanceMonitoring
        # - PlacementTenancy
        # - SpotPrice
        #
        # See AWS CloudFormation and EC2 documentation for more details
        #
        # Optional
        #
        blockDeviceMappings:
          - deviceName: ...
            ebs:
              # same structure as `masterVolume` and `rootVolume` above
            noDevice: false
            virtualName: ...
        ebsOptimized: false
        instanceMonitoring: false
        placementTenancy: ...
        spotPrice: ...

3.6.3. Azure instance group location parameters

The following is a documented example of an Azure instance group definition:

spec:
  # ...
  nodes:
  - name: 'group1'
    # ...
    locations:
    - locationRef: azure1
      azure:
        # SSH public key to setup on Azure instances
        sshKey: '...'

        # Whether to use an availability set for this instance group
        isAvailabilitySet: true

        # Azure instance type
        instanceType: ...

        # root Azure disk parameters
        osDisk:
          # Type may be 'image', 'vhd', or 'managedDisk'
          type: 'image'
          imageId: ...
          imagePublisher: ...
          imageOffer: ...
          imageVersion: ...
          sourceUri: ...

        # etcd data store Azure disks parameters
        masterDataDisk:
          diskSizeGb: 48
          lun: 0
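
For illustration, an osDisk of type 'image' referencing an Azure
Marketplace image might be filled in as follows; the publisher, offer,
and version values below are hypothetical examples, not defaults:

```yaml
osDisk:
  type: 'image'
  imagePublisher: Canonical
  imageOffer: UbuntuServer
  imageVersion: '16.04-LTS'
```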

3.6.4. Baremetal instance group location parameters

The following is a documented example of a baremetal instance group definition:

spec:
  # ...
  nodes:
  - name: 'group1'
    # ...
    locations:
    - locationRef: datacenter1
      baremetal:
        # The list of instances in this bare metal instance group
        hosts:
          - address: inst1.group1.vm.local
          - address: 192.168.33.112

        # Address of a load balancer for the Kubernetes master API.
        # If set, the load balancer must be provisioned outside of Kublr
        # and must point to the Kubernetes API ports of the cluster
        # master instances.
        loadBalancerAddress: ...
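
For illustration, a three-master baremetal setup with an externally
provisioned load balancer might look like this; all addresses are
hypothetical:

```yaml
spec:
  master:
    minNodes: 3
    locations:
    - locationRef: datacenter1
      baremetal:
        hosts:
          - address: 192.168.33.101
          - address: 192.168.33.102
          - address: 192.168.33.103
        # provisioned outside Kublr, e.g. an HAProxy instance in front
        # of the masters' Kubernetes API ports
        loadBalancerAddress: 192.168.33.100
```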

3.7. System docker images customization

Kublr requires a number of “system” Kublr and Kubernetes docker images to function normally.

By default, these images are pulled from a number of public docker image registries: Docker Hub, Google GCR, Quay.io, etc.

To enable the creation of clusters in fully network-isolated environments, Kublr allows specifying substitute docker registries and individual docker image substitutions in the cluster spec.

spec:
  # ...
  dockerRegistry:
    # Substitution registry authentication
    auth:
    -
      # Registry address
      registry: 'my-quay-proxy.intranet'

      # Reference to the username/password secret object
      secretRef: 'my-quay-proxy-secret'

    # Registry override definitions
    override:
      # Default override registry
      default: 'my-registry.intranet'

      # docker.io override
      docker_io: ...

      # gcr.io override
      gcr_io: ...

      # k8s.gcr.io override
      k8s_gcr_io: ...

      # quay.io override
      quay_io: 'my-quay-proxy.intranet'
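
The effect of an override table like the one above can be sketched as
follows. This is an illustration of the substitution semantics, not
Kublr's actual rewriting code:

```python
# override table mirroring the YAML example above
OVERRIDES = {
    'default': 'my-registry.intranet',
    'quay.io': 'my-quay-proxy.intranet',
}

def rewrite_image(image: str) -> str:
    """Replace the registry part of an image reference per the override table."""
    first, sep, rest = image.partition('/')
    # the first path component is a registry host only if it contains
    # a dot or a port; otherwise the image implicitly comes from docker.io
    if sep and ('.' in first or ':' in first):
        registry, path = first, rest
    else:
        registry, path = 'docker.io', image
    return OVERRIDES.get(registry, OVERRIDES['default']) + '/' + path

# quay.io images go to the dedicated quay proxy
assert rewrite_image('quay.io/coreos/flannel:v0.10.0') == \
    'my-quay-proxy.intranet/coreos/flannel:v0.10.0'
# all other registries fall back to the default override
assert rewrite_image('k8s.gcr.io/pause:3.1') == 'my-registry.intranet/pause:3.1'
```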

Individual image substitution may be achieved via Kublr agent configuration customization in the sections kublr.version.<image-id> and kublr.image.<image-id>. See Kublr agent configuration reference for more details.

3.8. Custom Kublr agent configuration

Custom Kublr agent configuration parameters may be added to the cluster specification on different levels:

  • cluster

  • location

  • instance group (for masters and nodes)

  • instance group location (for masters and nodes)

Configuration flags defined on more specific (lower) levels override flags defined on more general (higher) levels.

spec:
  # ...
  kublrAgentConfig:
    # ...
  locations:
  - # ...
    kublrAgentConfig:
      # ...
  master:
    kublrAgentConfig:
      # ...
    locations:
    - # ...
      kublrAgentConfig:
        # ...
  nodes:
  - name: default
    kublrAgentConfig:
      # ...
    locations:
    - # ...
      kublrAgentConfig:
        # ...
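
The override behavior can be sketched as a recursive map merge applied
from the most general level to the most specific one. This is an
illustration of the precedence rules, not Kublr's actual merge logic:

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Return base recursively updated with override; override wins on conflicts."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# hypothetical flag names, for illustration only
cluster_cfg  = {'logging': {'level': 'info'}, 'flag': 1}
location_cfg = {'flag': 2}
group_cfg    = {'logging': {'level': 'debug'}}

# more specific levels are merged in last, so they win
effective = deep_merge(deep_merge(cluster_cfg, location_cfg), group_cfg)
assert effective == {'logging': {'level': 'debug'}, 'flag': 2}
```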

4. Full cluster definition example

Minimal AWS cluster definition; most parameters are filled in with defaults:

kind: Secret
metadata:
  name: aws-secret1
spec:
  awsApiAccessKey:
    accessKeyId: '...'
    secretAccessKey: '...'

---

kind: Cluster
metadata:
  name: cluster-1527283049

spec:
  locations:
    - name: aws1
      aws:
        region: us-east-1
        awsApiAccessSecretRef: aws-secret1

  master:
    minNodes: 1
    kublrVariant: aws-ubuntu-16.04

  nodes:
    - kublrVariant: aws-ubuntu-16.04
      minNodes: 2

Slightly more detailed AWS cluster definition, with availability zones specified:

kind: Secret
metadata:
  name: aws-secret1
spec:
  awsApiAccessKey:
    accessKeyId: '...'
    secretAccessKey: '...'

---

kind: Cluster
metadata:
  name: cluster-1527283049
spec:
  locations:
    - name: aws1
      aws:
        region: us-east-1
        awsApiAccessSecretRef: aws-secret1
        availabilityZones:
          - us-east-1e
          - us-east-1f

  master:
    minNodes: 1
    kublrVariant: aws-ubuntu-16.04
    locations:
      - locationRef: aws1
        aws:
          instanceType: t2.medium
          availabilityZones:
            - us-east-1e

  nodes:
    - kublrVariant: aws-ubuntu-16.04
      minNodes: 2
      locations:
        - locationRef: aws1
          aws:
            instanceType: t2.large
            availabilityZones:
              - us-east-1e
              - us-east-1f
