Using existing AWS resources

1. Overview

Kublr always creates Kubernetes clusters in AWS in a separate CloudFormation stack.

In most cases this stack includes a VPC, subnets, security groups, IAM roles and instance profiles, an S3 bucket, and many other AWS resources necessary for a functional Kubernetes cluster.

In some situations some of these objects already exist, and creating them is unnecessary, undesirable, or not permitted.

Kublr allows reusing existing AWS resources for new Kubernetes clusters via cluster specification customization.

This article documents various scenarios for reusing AWS resources.

2. Existing resources scenarios

2.1. Existing IAM Roles and Instance Profiles

NB! This feature is supported in Kublr starting with version 1.10.2

By default Kublr creates two IAM Roles with corresponding IAM Instance Profiles: the first pair is used for master nodes, and the second for all node instances.

It is possible to reuse existing IAM Roles and instance profiles via Kublr cluster specification as shown in the following example:

metadata:
  name: cluster-1527283049
spec:
  locations:
    - name: aws1
      aws:
        ...
        iamRoleMasterPathName: '/kublr/master-role-A5FF3GD'
        iamInstanceProfileMasterPathName: null
        iamRoleNodePathName: 'node-role'
        iamInstanceProfileNodePathName: null
        ...

  master: ...

  nodes: ...

When the AWS location properties iamRoleMasterPathName, iamInstanceProfileMasterPathName, iamRoleNodePathName, and/or iamInstanceProfileNodePathName are defined, the corresponding IAM objects are assumed to exist and are not created by Kublr.

Existing roles must include the permissions described in the following sections.

2.1.1. Master role permissions

The following policy example shows permissions that master instances may need:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::<S3 secret bucket ARN>",
                "arn:aws:s3:::<S3 secret bucket ARN>/*"
            ],
            "Effect": "Allow"
        },
        {
            "Action": [
                "cloudformation:DescribeStackResources",
                "cloudformation:DescribeStacks",

                "ec2:*",

                "elasticloadbalancing:*",

                "ecr:GetAuthorizationToken",
                "ecr:BatchCheckLayerAvailability",
                "ecr:GetDownloadUrlForLayer",
                "ecr:GetRepositoryPolicy",
                "ecr:DescribeRepositories",
                "ecr:ListImages",
                "ecr:BatchGetImage",

                "autoscaling:DescribeTags",
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup"
            ],
            "Resource": "*",
            "Effect": "Allow"
        }
    ]
}
  • S3 permissions are required for masters to manage and access cluster secrets

  • CloudFormation and EC2 permissions are required for masters to work with master EBS volumes and associated EIPs, and to manage their own instance properties.

    These permissions are mandatory.

  • ELB permissions are necessary for master nodes to be able to manage dynamic AWS ELB for Kubernetes Services of load balancer type.

    These permissions are not required if load balancer Kubernetes Services are not used.

  • ECR permissions may be needed if Docker images from ECR need to be run on the cluster.

    These permissions are not required if ECR is not used.

  • Autoscaling permissions may be needed if the Kubernetes cluster autoscaler is used.

    These permissions are not required if node autoscaling is not used.

Permissions may be further restricted by specifying resource restrictions and conditions.
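
For example, the mutating autoscaling actions could be limited to Auto Scaling groups carrying a specific tag via a Condition element. This is a sketch only: the tag key and value below are hypothetical placeholders, and the Describe* actions generally do not support resource-level restrictions, so they must remain in a statement with "Resource": "*".

```json
{
    "Action": [
        "autoscaling:SetDesiredCapacity",
        "autoscaling:TerminateInstanceInAutoScalingGroup"
    ],
    "Condition": {
        "StringEquals": {
            "autoscaling:ResourceTag/kublr-cluster": "cluster-1527283049"
        }
    },
    "Resource": "*",
    "Effect": "Allow"
}
```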

2.1.2. Node role permissions

The following policy example shows permissions that node instances may need:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "s3:*",
            "Resource": [
                "<S3 secret bucket ARN>",
                "<S3 secret bucket ARN/*"
            ],
            "Effect": "Allow"
        },
        {
            "Action": [
                "cloudformation:DescribeStackResources",
                "cloudformation:DescribeStacks",

                "ec2:Describe*",
                "ec2:CreateSnapshot",
                "ec2:CreateTags",
                "ec2:DeleteSnapshot",
                "ec2:AttachVolume",
                "ec2:DetachVolume",
                "ec2:AssociateAddress",
                "ec2:DisassociateAddress",
                "ec2:ModifyInstanceAttribute",

                "ecr:GetAuthorizationToken",
                "ecr:BatchCheckLayerAvailability",
                "ecr:GetDownloadUrlForLayer",
                "ecr:GetRepositoryPolicy",
                "ecr:DescribeRepositories",
                "ecr:ListImages",
                "ecr:BatchGetImage",

                "autoscaling:DescribeTags",
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup"
            ],
            "Resource": "*",
            "Effect": "Allow"
        }
    ]
}
  • S3 permissions are required for nodes to access cluster secrets distributed by masters.

  • CloudFormation and EC2 permissions are required for nodes to work with EBS volumes and associated EIPs, and to manage their own instance properties.

    These permissions are mandatory.

  • ECR permissions may be needed if Docker images from ECR need to be run on the cluster.

    These permissions are not required if ECR is not used.

  • Autoscaling permissions may be needed if the Kubernetes cluster autoscaler is used.

    These permissions are not required if node autoscaling is not used.

Permissions may be further restricted by specifying resource restrictions and conditions.
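
As one illustration, if node instances in a particular environment only ever read secrets from the bucket, the s3:* statement might be narrowed to read-only actions. This is an assumption about your access pattern, not a documented Kublr requirement; verify against your cluster's actual behavior before applying.

```json
{
    "Action": [
        "s3:GetObject",
        "s3:ListBucket"
    ],
    "Resource": [
        "<S3 secret bucket ARN>",
        "<S3 secret bucket ARN>/*"
    ],
    "Effect": "Allow"
}
```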

2.2. Existing VPC

By default Kublr creates a new VPC for a new Kubernetes cluster.

Reusing an existing VPC is possible via Kublr cluster specification as shown in the following example:

metadata:
  name: cluster-1527283049
spec:
  locations:
    - name: aws1
      aws:
        ...
        vpcId: 'vpc-12345'
        vpcCidrBlock: '172.16.0.0/16'
        cidrBlocks:
          ...
  ...

When the AWS location property vpcId is defined, Kublr will not create a new VPC and will use the specified existing VPC instead.

For an existing VPC it is also necessary to specify the IP CIDR block associated with this VPC using the vpcCidrBlock property.

It may also be necessary to provide CIDR block information for subnets if the provided VPC CIDR block is small and subnets will be generated by Kublr. This information is specified in the cidrBlocks property.


2.3. Existing Subnets

NB! This feature is supported in Kublr starting with version 1.10.2

Default Kublr behavior regarding VPC subnets is as follows:

  • New VPC subnets are created for a new Kubernetes cluster.

  • One “public” subnet (with public address assignment) is created for each AZ with public node instance groups.

  • One “private” subnet (without public address assignment) is created for each AZ with private node instance groups.

  • Separate public or private subnets are created for AZs used by master instance groups.

  • Node public subnets are reused by public node instance groups using the same AZs.

  • Node private subnets are reused by private node instance groups using the same AZs.

  • Master public subnets are reused by public masters located in the same AZs.

  • Master private subnets are reused by private masters located in the same AZs.

Reusing existing Subnets is possible via Kublr cluster specification.

It is also possible to combine existing subnets and subnets created by Kublr.

The following example shows a cluster that uses an existing VPC and a number of existing subnets, and also creates new subnets.

metadata:
  name: cluster-1527283049
spec:
  locations:
    - name: aws1
      aws:
        region: us-east-1
        # Created cluster will be limited to the specified zones.
        #
        # If no zones are specified, Kublr will try to identify and use all zones
        # available in the given region.
        #
        # This zone list may be overridden (although only reduced, not extended)
        # by each specific instance group
        #
        availabilityZones:
          - us-east-1a
          - us-east-1c
          - us-east-1d
          - us-east-1f

        # Existing VPC will be used
        #
        # It is not possible to use existing subnets without specifying
        # corresponding VPC Id
        #
        vpcId: vpc-12345

        # VPC CIDR must be specified for existing VPC
        #
        vpcCidrBlock: '172.16.0.0/16'

        # New subnet CIDRs may be specified if new subnets need to be created
        #
        #cidrBlocks:
        #  ...

  master:
    # This cluster will contain 3 master instances
    #
    minNodes: 3
    locations:
      - locationRef: aws1
        aws:
          # This means that this instance group is "public" (instances will be
          # assigned public IP addresses)
          #
          nodeIpAllocationPolicy: privateAndPublic

          # master instances will be placed into us-east-1 AZs 'f', 'a', and 'c'
          # in the specified order.
          #
          # In this case
          # - master #1 will be placed in AZ 'us-east-1f'
          # - master #2 will be placed in AZ 'us-east-1a'
          # - master #3 will be placed in AZ 'us-east-1c'
          #
          availabilityZones: ['us-east-1f', 'us-east-1a', 'us-east-1c']

          # Existing subnets for master instance groups.
          #
          # Elements in this array must correspond to the elements in the 'availabilityZones'
          # array.
          #
          # Empty or null elements in this array designate new subnets that should
          # be created by Kublr. Non-empty elements designate existing subnets that
          # should be used for corresponding master.
          #
          # In this case
          # - master #1 will be assigned to an existing subnet with id 'subnet-123',
          #   which is assumed to be in AZ 'us-east-1f'.
          # - master #2 will be placed in a new public subnet created in AZ 'us-east-1a'
          # - master #3 will be assigned to an existing subnet with id 'subnet-456',
          #   which is assumed to be in AZ 'us-east-1c'.
          #
          subnetIds: ['subnet-123', '', 'subnet-456']
  nodes:
    - name: default

      # number of instances in this instance group may vary between 3 and 10
      #
      minNodes: 3
      maxNodes: 10

      locations:
        - aws:
            # This means that this instance group is "private" (instances will NOT be
            # assigned public IP addresses)
            #
            nodeIpAllocationPolicy: private

            # node instances in this group will be distributed between 'us-east-1'
            # availability zones 'us-east-1d' and 'us-east-1f'
            #
            availabilityZones: ['us-east-1d', 'us-east-1f']

            # because 'subnetIds' property is omitted for this instance group,
            # Kublr will create new subnets for it.
            # In this specific case, two "private" subnets will be created, one
            # for each of the availability zones 'us-east-1d' and 'us-east-1f'
            #
            # subnetIds: null

The following cluster specification snippet shows a cluster for which no VPC or subnets are created, only existing ones are used:

metadata:
  name: cluster-1527283049
spec:
  locations:
    - name: aws1
      aws:
        region: us-east-1
        availabilityZones: ['us-east-1a', 'us-east-1c']
        vpcId: vpc-12345
        vpcCidrBlock: '172.16.0.0/16'
  master:
    minNodes: 1
    locations:
      - locationRef: aws1
        aws:
          availabilityZones: ['us-east-1a']
          subnetIds: ['subnet-123']
  nodes:
    - name: default
      minNodes: 3
      maxNodes: 10
      locations:
        - locationRef: aws1
          aws:
            # 'availabilityZones' here can be omitted because it is exactly the
            # same as in the corresponding location
            availabilityZones: ['us-east-1a', 'us-east-1c']
            subnetIds: ['subnet-123', 'subnet-456']

2.3.1. Existing Subnets and VPC constraints and requirements

When existing VPC and/or subnets are used, the following conditions must be met:

  • each existing subnet must be a subnet of the specified VPC

  • existing subnets’ CIDRs must be sub-CIDRs of the specified VPC CIDR and must not conflict with each other or with the CIDRs specified in the cidrBlocks property for new subnets created by Kublr

  • network routes in the existing VPC and subnets must be configured so that network connectivity is available between all cluster instances

  • each existing subnet’s “Auto-assign public IPv4 address” property value must correspond to the nodeIpAllocationPolicy property value of each node instance group assigned to this subnet;

    This in particular means that subnets with different values of the “Auto-assign public IPv4 address” property cannot be used for the same instance group;

    and, vice versa, the same existing subnet cannot be shared by instance groups with different values of nodeIpAllocationPolicy
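
The CIDR constraints above can be checked ahead of time with a short script. The following is a sketch using Python's standard ipaddress module; the CIDR values are illustrative, matching the examples in this article.

```python
import ipaddress

def check_subnet_cidrs(vpc_cidr, subnet_cidrs):
    """Verify that every subnet CIDR is inside the VPC CIDR
    and that no two subnet CIDRs overlap. Returns a list of
    problem descriptions (empty list means no conflicts)."""
    vpc = ipaddress.ip_network(vpc_cidr)
    nets = [ipaddress.ip_network(c) for c in subnet_cidrs]
    problems = []
    for n in nets:
        if not n.subnet_of(vpc):
            problems.append(f"{n} is not within VPC CIDR {vpc}")
    for i, a in enumerate(nets):
        for b in nets[i + 1:]:
            if a.overlaps(b):
                problems.append(f"{a} overlaps {b}")
    return problems

# Illustrative values from the examples in this article
print(check_subnet_cidrs('172.16.0.0/16',
                         ['172.16.2.0/23', '172.16.4.0/23', '172.16.16.0/20']))
# → [] (no conflicts)
```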

2.4. Existing Security Groups

NB! This feature is supported in Kublr starting with version 1.10.2

Default Kublr behavior regarding security groups is as follows:

  • If an existing VPC is used, Kublr creates a new security group for this VPC that is assigned to all instances in the cluster.

    This security group allows all traffic between any instances in the given VPC

  • If a new VPC is created, this VPC’s default security group is configured to allow all traffic between any instances in the given VPC

  • One “master” security group is created and assigned to all master instances in the cluster.

    This security group allows incoming TCP/443 and TCP/22 connections.

  • One “node” security group is created and assigned to all node instances in the cluster.

    This security group allows incoming TCP/22 connections.

  • If Kublr creates master ELBs, public and/or private (according to the masterElbAllocationPolicy property value), a separate security group is created for each of these ELBs.

Reusing existing Security Groups is possible via Kublr cluster specification.

It is also possible to combine existing Security Groups and ones created by Kublr.

The following example shows a cluster that uses an existing VPC and a number of existing security groups.

Example: Existing VPC and security groups cluster spec

metadata:
  name: cluster-1527283049
spec:
  locations:
    - name: aws1
      aws:
        region: us-east-1

        # Existing VPC will be used
        #
        # It is not possible to use existing security groups without specifying
        # corresponding existing VPC, to which these security groups belong
        #
        vpcId: vpc-12345

        # VPC CIDR must be specified for existing VPC
        #
        vpcCidrBlock: '172.16.0.0/16'

        # Skip creating standard Kublr security groups
        #
        skipSecurityGroupDefault: true
        skipSecurityGroupMaster: true
        skipSecurityGroupNode: true

        # The following existing security groups will be assigned to all cluster
        # instances
        existingSecurityGroupIds: ['sg-123']

  master:
    locations:
      - locationRef: aws1
        aws:
          # The following existing security groups will be assigned to this
          # group's instances in addition to the security groups listed in the
          # location object above
          existingSecurityGroupIds: ['sg-456']
  nodes:
    - name: default
      ...
      locations:
        - aws:
            # The following existing security groups will be assigned to this
            # group's instances in addition to the security groups listed in the
            # location object above
            existingSecurityGroupIds: ['sg-789']

2.4.1. Existing Security Groups constraints and requirements

When existing Security Groups are used, the following conditions must be met:

  • each existing Security Group must be a Security Group of the specified VPC

  • the final combination of security groups, existing and new, assigned to the instances must allow all traffic between all cluster instances
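
A common way to satisfy the second requirement is a self-referencing security group, i.e. a group whose ingress rule allows all traffic from members of the same group. The following CloudFormation sketch uses illustrative resource names in the style of the template in section 4:

```yaml
  SgCluster:
    Type: 'AWS::EC2::SecurityGroup'
    Properties:
      GroupDescription: Allow all traffic between cluster instances
      VpcId: !Ref NewVpc

  # A separate ingress resource is used so that the rule can reference
  # SgCluster itself without creating a circular dependency
  SgClusterSelfIngress:
    Type: 'AWS::EC2::SecurityGroupIngress'
    Properties:
      GroupId: !Ref SgCluster
      IpProtocol: -1
      SourceSecurityGroupId: !Ref SgCluster
```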

3. Minimizing Kublr AWS permissions

Configuring a Kublr AWS Kubernetes cluster to use an existing VPC, subnets, security groups, and IAM roles and instance profiles allows minimizing the AWS permissions that Kublr needs to create a cluster.

Minimal set of actions in AWS policy

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iam:GetUser",
                "iam:GetRole",
                "iam:PassRole",

                "cloudformation:CreateUploadBucket",
                "cloudformation:ListStacks",
                "cloudformation:CancelUpdateStack",
                "cloudformation:DescribeStackResources",
                "cloudformation:SignalResource",
                "cloudformation:UpdateTerminationProtection",
                "cloudformation:DescribeStackResource",
                "cloudformation:CreateChangeSet",
                "cloudformation:DeleteChangeSet",
                "cloudformation:GetTemplateSummary",
                "cloudformation:DescribeStacks",
                "cloudformation:ContinueUpdateRollback",
                "cloudformation:GetStackPolicy",
                "cloudformation:DescribeStackEvents",
                "cloudformation:CreateStack",
                "cloudformation:GetTemplate",
                "cloudformation:DeleteStack",
                "cloudformation:UpdateStack",
                "cloudformation:DescribeChangeSet",
                "cloudformation:ExecuteChangeSet",
                "cloudformation:ValidateTemplate",
                "cloudformation:ListChangeSets",
                "cloudformation:ListStackResources",

                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribePolicies",
                "autoscaling:DescribeTags",
                "autoscaling:DescribeLaunchConfigurations",
                "autoscaling:DescribeLoadBalancerTargetGroups",
                "autoscaling:DescribeLoadBalancers",
                "autoscaling:DescribeScalingActivities",
                "autoscaling:UpdateAutoScalingGroup",
                "autoscaling:TerminateInstanceInAutoScalingGroup",
                "autoscaling:SetDesiredCapacity",
                "autoscaling:AttachLoadBalancerTargetGroups",
                "autoscaling:CreateOrUpdateTags",
                "autoscaling:DeleteAutoScalingGroup",
                "autoscaling:CreateAutoScalingGroup",
                "autoscaling:AttachLoadBalancers",
                "autoscaling:CreateLaunchConfiguration",
                "autoscaling:DeleteLaunchConfiguration",

                "ec2:AllocateAddress",
                "ec2:AssociateIamInstanceProfile",
                "ec2:AttachNetworkInterface",
                "ec2:AttachVolume",
                "ec2:CreateLaunchTemplate",
                "ec2:CreateLaunchTemplateVersion",
                "ec2:CreateTags",
                "ec2:CreateVolume",
                "ec2:DeleteLaunchTemplate",
                "ec2:DeleteLaunchTemplateVersions",
                "ec2:DeleteTags",
                "ec2:DeleteVolume",
                "ec2:DescribeVolumes",
                "ec2:DetachVolume",
                "ec2:DisassociateAddress",
                "ec2:DisassociateIamInstanceProfile",
                "ec2:ModifyInstanceAttribute",
                "ec2:ModifyLaunchTemplate",
                "ec2:ModifyNetworkInterfaceAttribute",
                "ec2:ModifyVolumeAttribute",
                "ec2:RebootInstances",
                "ec2:ReleaseAddress",
                "ec2:ReplaceIamInstanceProfileAssociation",
                "ec2:RunInstances",
                "ec2:StartInstances",
                "ec2:StopInstances",
                "ec2:TerminateInstances",

                "tag:GetResources",
                "tag:GetTagValues",
                "tag:GetTagKeys",

                "s3:ListAllMyBuckets",
                "s3:PutBucketPolicy",
                "s3:CreateBucket",
                "s3:DeleteBucket",
                "s3:GetObject",

                "logs:DescribeLogGroups",
                "logs:CreateLogGroup",
                "logs:PutRetentionPolicy",
                "logs:DeleteLogGroup"
            ],
            "Resource": "*"
        }
    ]
}

4. Existing AWS resources example

The following CloudFormation stack template provides an example of a set of AWS resources that may be used as existing VPC, subnets, IAM roles and instance profiles, and security groups for a Kublr cluster.

Existing AWS resources for Kublr

AWSTemplateFormatVersion: '2010-09-09'
Description: CloudFormation template for VPC and IAM objects to host a Kublr cluster
Conditions:
  RegionUSEast1:
    !Equals
      - !Ref 'AWS::Region'
      - us-east-1

Resources:

  NewVpc:
    Type: 'AWS::EC2::VPC'
    Properties:
      CidrBlock: '172.16.0.0/16'
      EnableDnsHostnames: true
      EnableDnsSupport: true
      InstanceTenancy: default
      Tags:
        - Key: Name
          Value: !Sub '${AWS::StackName}-vpc'

  VpcDhcpOptions:
    Type: 'AWS::EC2::DHCPOptions'
    Properties:
      DomainName:
        !Join
          - ' '
          - - !If
                - RegionUSEast1
                - ec2.internal
                - !Sub '${AWS::Region}.compute.internal'
             #- default.svc.cluster.local
             #- svc.cluster.local
      DomainNameServers:
        - !Sub '100.64.0.10,AmazonProvidedDNS'
      Tags:
        - Key: Name
          Value: !Sub '${AWS::StackName}-dhcp-options'
  VpcDhcpOptionsAssociation:
    Type: 'AWS::EC2::VPCDHCPOptionsAssociation'
    Properties:
      VpcId: !Ref NewVpc
      DhcpOptionsId: !Ref VpcDhcpOptions

  Gw:
    Type: 'AWS::EC2::InternetGateway'
    Properties:
      Tags:
        - Key: Name
          Value: !Sub '${AWS::StackName}-igw'

  VpcGwAttachment:
    Type: 'AWS::EC2::VPCGatewayAttachment'
    Properties:
      InternetGatewayId: !Ref Gw
      VpcId: !Ref NewVpc

  RouteTablePublic:
    Type: 'AWS::EC2::RouteTable'
    Properties:
      Tags:
        - Key: Name
          Value: !Sub '${AWS::StackName}-rt'
      VpcId: !Ref NewVpc

  RouteToInternet:
    Type: 'AWS::EC2::Route'
    Properties:
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref Gw
      RouteTableId: !Ref RouteTablePublic
    DependsOn: VpcGwAttachment

  SecretExchangeBucketVpcEndpoint:
    Type: 'AWS::EC2::VPCEndpoint'
    Properties:
      RouteTableIds:
        - !Ref RouteTablePublic
      ServiceName: !Sub 'com.amazonaws.${AWS::Region}.s3'
      VpcId: !Ref NewVpc

  # SecurityGroup for masters
  SgMaster:
    Type: 'AWS::EC2::SecurityGroup'
    Properties:
      GroupDescription: !Ref 'AWS::StackName'
      SecurityGroupEgress:
        - CidrIp: 0.0.0.0/0
          FromPort: 0
          IpProtocol: -1
          ToPort: 65535
      SecurityGroupIngress:
        - CidrIp: 0.0.0.0/0
          FromPort: 22
          IpProtocol: tcp
          ToPort: 22
        - CidrIp: 0.0.0.0/0
          FromPort: 443
          IpProtocol: tcp
          ToPort: 443
        - CidrIp: 0.0.0.0/0
          FromPort: 30000
          IpProtocol: tcp
          ToPort: 32767
        - CidrIp: 0.0.0.0/0
          FromPort: 30000
          IpProtocol: udp
          ToPort: 32767
        - CidrIp: 172.16.0.0/16
          FromPort: 0
          IpProtocol: -1
          ToPort: 65535
      Tags:
        - Key: Name
          Value: !Sub '${AWS::StackName}-sg-master'
      VpcId: !Ref NewVpc

  # SecurityGroup for nodes
  SgNode:
    Type: 'AWS::EC2::SecurityGroup'
    Properties:
      GroupDescription: !Ref 'AWS::StackName'
      SecurityGroupEgress:
        - CidrIp: 0.0.0.0/0
          FromPort: 0
          IpProtocol: -1
          ToPort: 65535
      SecurityGroupIngress:
        - CidrIp: 0.0.0.0/0
          FromPort: 22
          IpProtocol: tcp
          ToPort: 22
        - CidrIp: 0.0.0.0/0
          FromPort: 30000
          IpProtocol: tcp
          ToPort: 32767
        - CidrIp: 0.0.0.0/0
          FromPort: 30000
          IpProtocol: udp
          ToPort: 32767
        - CidrIp: 172.16.0.0/16
          FromPort: 0
          IpProtocol: -1
          ToPort: 65535
      Tags:
        - Key: Name
          Value: !Sub '${AWS::StackName}-sg-node'
      VpcId: !Ref NewVpc

  SubnetMasterPublic0:
    Type: 'AWS::EC2::Subnet'
    Properties:
      AvailabilityZone: 'us-east-1a'
      CidrBlock: '172.16.2.0/23'
      MapPublicIpOnLaunch: true
      Tags:
        - Key: Name
          Value: !Sub '${AWS::StackName}-subnet-master-public-0'
      VpcId: !Ref NewVpc
  RtAssocSubnetMasterPublic0:
    Type: 'AWS::EC2::SubnetRouteTableAssociation'
    Properties:
      RouteTableId: !Ref RouteTablePublic
      SubnetId: !Ref SubnetMasterPublic0
  SubnetMasterPublic1:
    Type: 'AWS::EC2::Subnet'
    Properties:
      AvailabilityZone: 'us-east-1b'
      CidrBlock: '172.16.4.0/23'
      MapPublicIpOnLaunch: true
      Tags:
        - Key: Name
          Value: !Sub '${AWS::StackName}-subnet-master-public-1'
      VpcId: !Ref NewVpc
  RtAssocSubnetMasterPublic1:
    Type: 'AWS::EC2::SubnetRouteTableAssociation'
    Properties:
      RouteTableId: !Ref RouteTablePublic
      SubnetId: !Ref SubnetMasterPublic1
  SubnetMasterPublic2:
    Type: 'AWS::EC2::Subnet'
    Properties:
      AvailabilityZone: 'us-east-1c'
      CidrBlock: '172.16.6.0/23'
      MapPublicIpOnLaunch: true
      Tags:
        - Key: Name
          Value: !Sub '${AWS::StackName}-subnet-master-public-2'
      VpcId: !Ref NewVpc
  RtAssocSubnetMasterPublic2:
    Type: 'AWS::EC2::SubnetRouteTableAssociation'
    Properties:
      RouteTableId: !Ref RouteTablePublic
      SubnetId: !Ref SubnetMasterPublic2
  SubnetNodePublic0:
    Type: 'AWS::EC2::Subnet'
    Properties:
      AvailabilityZone: 'us-east-1a'
      CidrBlock: '172.16.16.0/20'
      MapPublicIpOnLaunch: true
      Tags:
        - Key: Name
          Value: !Sub '${AWS::StackName}-subnet-node-public-0'
      VpcId: !Ref NewVpc
  RtAssocSubnetNodePublic0:
    Type: 'AWS::EC2::SubnetRouteTableAssociation'
    Properties:
      RouteTableId: !Ref RouteTablePublic
      SubnetId: !Ref SubnetNodePublic0
  SubnetNodePublic1:
    Type: 'AWS::EC2::Subnet'
    Properties:
      AvailabilityZone: 'us-east-1b'
      CidrBlock: '172.16.32.0/20'
      MapPublicIpOnLaunch: true
      Tags:
        - Key: Name
          Value: !Sub '${AWS::StackName}-subnet-node-public-1'
      VpcId: !Ref NewVpc
  RtAssocSubnetNodePublic1:
    Type: 'AWS::EC2::SubnetRouteTableAssociation'
    Properties:
      RouteTableId: !Ref RouteTablePublic
      SubnetId: !Ref SubnetNodePublic1
  SubnetNodePublic2:
    Type: 'AWS::EC2::Subnet'
    Properties:
      AvailabilityZone: 'us-east-1c'
      CidrBlock: '172.16.48.0/20'
      MapPublicIpOnLaunch: true
      Tags:
        - Key: Name
          Value: !Sub '${AWS::StackName}-subnet-node-public-2'
      VpcId: !Ref NewVpc
  RtAssocSubnetNodePublic2:
    Type: 'AWS::EC2::SubnetRouteTableAssociation'
    Properties:
      RouteTableId: !Ref RouteTablePublic
      SubnetId: !Ref SubnetNodePublic2

  # Role and Profile for masters
  RoleMaster:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Action:
              - 'sts:AssumeRole'
            Effect: Allow
            Principal:
              Service:
                - ec2.amazonaws.com
        Version: '2012-10-17'
      Path: /
      Policies:
        - PolicyName: root
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action: 's3:*'
                Resource:
                  - !Sub 'arn:${AWS::Partition}:s3:::oc-test-existing-vpc'
                  - !Sub 'arn:${AWS::Partition}:s3:::oc-test-existing-vpc/*'
              - Effect: Allow
                Action:
                  - 'ec2:*'
                  - 'elasticloadbalancing:*'
                  - 'ecr:GetAuthorizationToken'
                  - 'ecr:BatchCheckLayerAvailability'
                  - 'ecr:GetDownloadUrlForLayer'
                  - 'ecr:GetRepositoryPolicy'
                  - 'ecr:DescribeRepositories'
                  - 'ecr:ListImages'
                  - 'ecr:BatchGetImage'
                  - 'autoscaling:DescribeTags'
                  - 'autoscaling:DescribeAutoScalingGroups'
                  - 'autoscaling:DescribeAutoScalingInstances'
                  - 'autoscaling:SetDesiredCapacity'
                  - 'autoscaling:TerminateInstanceInAutoScalingGroup'
                  - 'rds:DescribeDBInstances'
                  - 'cloudformation:DescribeStackResources'
                  - 'cloudformation:DescribeStacks'
                Resource: '*'
  ProfileMaster:
    Type: 'AWS::IAM::InstanceProfile'
    Properties:
      Path: /
      Roles:
        - !Ref RoleMaster
    DependsOn:
      - RoleMaster

  # Role and Profile for nodes
  RoleNode:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Action:
              - 'sts:AssumeRole'
            Effect: Allow
            Principal:
              Service:
                - ec2.amazonaws.com
        Version: '2012-10-17'
      Path: /
      Policies:
        - PolicyName: root
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action: 's3:*'
                Resource:
                  - !Sub 'arn:${AWS::Partition}:s3:::oc-test-existing-vpc'
                  - !Sub 'arn:${AWS::Partition}:s3:::oc-test-existing-vpc/*'
              - Effect: Allow
                Action:
                  - 'ec2:Describe*'
                  - 'ec2:CreateSnapshot'
                  - 'ec2:CreateTags'
                  - 'ec2:DeleteSnapshot'
                  - 'ec2:AttachVolume'
                  - 'ec2:DetachVolume'
                  - 'ec2:AssociateAddress'
                  - 'ec2:DisassociateAddress'
                  - 'ec2:ModifyInstanceAttribute'
                  - 'ecr:GetAuthorizationToken'
                  - 'ecr:BatchCheckLayerAvailability'
                  - 'ecr:GetDownloadUrlForLayer'
                  - 'ecr:GetRepositoryPolicy'
                  - 'ecr:DescribeRepositories'
                  - 'ecr:ListImages'
                  - 'ecr:BatchGetImage'
                  - 'autoscaling:DescribeTags'
                  - 'autoscaling:DescribeAutoScalingGroups'
                  - 'autoscaling:DescribeAutoScalingInstances'
                  - 'autoscaling:SetDesiredCapacity'
                  - 'autoscaling:TerminateInstanceInAutoScalingGroup'
                  - 'rds:DescribeDBInstances'
                  - 'cloudformation:DescribeStackResources'
                  - 'cloudformation:DescribeStacks'
                Resource: '*'
  ProfileNode:
    Type: 'AWS::IAM::InstanceProfile'
    Properties:
      Path: /
      Roles:
        - !Ref RoleNode
    DependsOn:
      - RoleNode

Outputs:
  Vpc:
    Value: !Ref NewVpc
  SecurityGroupVpcDefault:
    Value: !GetAtt [NewVpc, DefaultSecurityGroup]
  SecurityGroupMaster:
    Value: !Ref SgMaster
  SecurityGroupNode:
    Value: !Ref SgNode
  SubnetMasterPublic0:
    Value: !Ref SubnetMasterPublic0
  SubnetMasterPublic1:
    Value: !Ref SubnetMasterPublic1
  SubnetMasterPublic2:
    Value: !Ref SubnetMasterPublic2
  SubnetNodePublic0:
    Value: !Ref SubnetNodePublic0
  SubnetNodePublic1:
    Value: !Ref SubnetNodePublic1
  SubnetNodePublic2:
    Value: !Ref SubnetNodePublic2
  RoleMaster:
    Value: !Ref RoleMaster
  ProfileMaster:
    Value: !Ref ProfileMaster
  RoleNode:
    Value: !Ref RoleNode
  ProfileNode:
    Value: !Ref ProfileNode

After creating this stack, use values from the stack Outputs section in the Kublr cluster specification as follows:

Kublr cluster spec example

kind: Cluster
metadata:
  name: 'test-existing-vpc'
spec:
  locations:
    - name: aws1
      aws:
        region: 'us-east-1'
        availabilityZones: ['us-east-1a', 'us-east-1b', 'us-east-1c']

        awsApiAccessSecretRef: 'awsSecret1'

        vpcId: 'vpc-0eee0c9060e2660ea'

        iamRoleMasterPathName: 'vpc-for-k8s-RoleMaster-MZC4DMMUNCHJ'
        iamInstanceProfileMasterPathName: 'vpc-for-k8s-ProfileMaster-MDLBDVSP02F4'
        iamRoleNodePathName: 'vpc-for-k8s-RoleNode-1CXJ6S4IXE54U'
        iamInstanceProfileNodePathName: 'vpc-for-k8s-ProfileNode-PRDVSKUXWC76'

        skipSecurityGroupDefault: true
        skipSecurityGroupMaster: true
        skipSecurityGroupNode: true

        existingSecurityGroupIds: ['sg-070f59f1a29b21f25']

  secretStore:
    awsS3:
      locationRef: aws1
      s3BucketName: test-existing-vpc

  master:
    maxNodes: 3
    locations:
      - locationRef: aws1
        aws:
          instanceType: t2.medium
          overrideImageId: ami-7031aa0f

          masterElbAllocationPolicy: 'none'
          existingSecurityGroupIds: ['sg-0a5804b9c7077a0a1']
          subnetIds: ['subnet-04c2ec88c6a16cf7d', 'subnet-0022e8fe7703a848f', 'subnet-09fac0c4d07fa7e72']
  nodes:
    - name: s1
      minNodes: 1
      locations:
        - locationRef: aws1
          aws:
            instanceType: t2.medium
            overrideImageId: ami-7031aa0f

            existingSecurityGroupIds: ['sg-0bb3145c20df016ee']
            subnetIds: ['subnet-0b05aebd6ba0a99c4', 'subnet-07d404d717b42995e', 'subnet-0c02f03ede5079b31']
    - name: s2
      minNodes: 1
      maxNodes: 10
      locations:
        - locationRef: aws1
          aws:
            sshKey: cloudify
            instanceType: t2.medium
            overrideImageId: ami-7031aa0f

            existingSecurityGroupIds: ['sg-0bb3145c20df016ee']
            subnetIds: ['subnet-0b05aebd6ba0a99c4', 'subnet-07d404d717b42995e', 'subnet-0c02f03ede5079b31']

