@detiber
Last active January 22, 2019 18:20

Cluster API Provider AWS Phases with KIND

Pre-demo Prep

Install KIND

go get -u sigs.k8s.io/kind
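`go get` drops the binary into `$GOPATH/bin` (by default `~/go/bin`); a quick sanity check, assuming that directory is on your PATH:

```shell
# kind should now resolve from $GOPATH/bin (default ~/go/bin)
command -v kind
kind version
```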

Build and install the artifacts

make clean
make clusterctl clusterawsadm docker-build-dev docker-push-dev
install bazel-bin/cmd/clusterctl/linux_amd64_pure_stripped/clusterctl ~/go/bin/
install bazel-bin/cmd/clusterawsadm/linux_amd64_pure_stripped/clusterawsadm ~/go/bin/

Prep the AWS environment and local env vars

export AWS_REGION=us-east-1
export SSH_KEY_NAME=default # pre-existing ssh key
clusterawsadm alpha bootstrap create-stack
export AWS_CREDENTIALS=$(aws iam create-access-key \
  --user-name bootstrapper.cluster-api-provider-aws.sigs.k8s.io)
export AWS_ACCESS_KEY_ID=$(echo "$AWS_CREDENTIALS" | jq -r .AccessKey.AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(echo "$AWS_CREDENTIALS" | jq -r .AccessKey.SecretAccessKey)
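Before moving on, it's worth confirming the credentials actually parsed and are accepted by AWS. This check is not part of the original demo, just a hedge against a mis-parsed key:

```shell
# Fail fast if either key failed to parse out of the JSON
: "${AWS_ACCESS_KEY_ID:?failed to parse access key id}"
: "${AWS_SECRET_ACCESS_KEY:?failed to parse secret access key}"

# Confirm the credentials are accepted by AWS
aws sts get-caller-identity
```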

Build the manifests

make manifests-dev

Stand up the KIND cluster and retrieve the kubeconfig

kind create cluster --name capa
cp $(kind get kubeconfig-path --name=capa) kind.kubeconfig
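Optionally confirm the KIND node is up before layering on the Cluster API components:

```shell
# The single kind node should report Ready within a minute or so
kubectl get nodes --kubeconfig kind.kubeconfig
```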

Prep the CAPA components

clusterctl alpha phases apply-cluster-api-components -p ./cmd/clusterctl/examples/aws/out/provider-components.yaml --kubeconfig kind.kubeconfig
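The controllers take a little while to start; a sketch of how to check on them (the namespace names below are assumptions based on typical CAPA manifests of this era, so verify them against your generated `provider-components.yaml`):

```shell
# Controller pods for the provider and core cluster-api
# (namespaces are assumptions; verify against provider-components.yaml)
kubectl get pods -n aws-provider-system --kubeconfig kind.kubeconfig
kubectl get pods -n cluster-api-system --kubeconfig kind.kubeconfig
```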

Demo steps

Deploy the cluster shared components

cat <<EOF | kubectl apply --kubeconfig kind.kubeconfig -f -
apiVersion: "cluster.k8s.io/v1alpha1"
kind: Cluster
metadata:
  name: test1
spec:
  clusterNetwork:
    services:
      cidrBlocks: ["10.96.0.0/12"]
    pods:
      cidrBlocks: ["192.168.0.0/16"]
    serviceDomain: "cluster.local"
  providerSpec:
    value:
      apiVersion: "awsprovider/v1alpha1"
      kind: "AWSClusterProviderSpec"
      region: "us-east-1"
      sshKeyName: "default"
EOF
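Once applied, the cluster controller reconciles the AWS-side network infrastructure (VPC, subnets, bastion), which takes a few minutes; you can watch the object while that happens:

```shell
# The Cluster object; AWS network setup takes a few minutes to reconcile
kubectl get clusters --kubeconfig kind.kubeconfig
```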

Deploy the controlplane instance

cat <<EOF | kubectl create --kubeconfig kind.kubeconfig -f -
apiVersion: "cluster.k8s.io/v1alpha1"
kind: Machine
metadata:
  name: aws-controlplane-0
  labels:
    set: controlplane
spec:
  versions:
    kubelet: v1.13.0
    controlPlane: v1.13.0
  providerSpec:
    value:
      apiVersion: awsprovider/v1alpha1
      kind: AWSMachineProviderSpec
      instanceType: "t2.medium"
      iamInstanceProfile: "control-plane.cluster-api-provider-aws.sigs.k8s.io"
      keyName: "default"
EOF
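Provisioning the control plane instance takes several minutes; watching the Machine object shows progress:

```shell
# Watch until the machine controller records an instance for the control plane
kubectl get machines --kubeconfig kind.kubeconfig -w
```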

Retrieve the kubeconfig from the controlplane

BASTION_IP=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=test1-bastion" "Name=instance-state-name,Values=running" --query "Reservations[0].Instances[0].PublicIpAddress" --output text)
CONTROLPLANE_IP=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=aws-controlplane-0" "Name=instance-state-name,Values=running" --query "Reservations[0].Instances[0].PrivateIpAddress" --output text)
ssh -A -J ubuntu@${BASTION_IP}:22 ubuntu@${CONTROLPLANE_IP} 'sudo cp /etc/kubernetes/admin.conf ~/kubeconfig && sudo chown ubuntu ~/kubeconfig'
scp -o "ProxyJump ubuntu@${BASTION_IP}" ubuntu@${CONTROLPLANE_IP}:kubeconfig kubeconfig
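At this point the workload cluster is reachable, but its node will stay NotReady until the CNI addon lands in the next step:

```shell
# Node appears but reports NotReady until a CNI addon is applied
kubectl get nodes --kubeconfig kubeconfig
```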

Apply the addons

clusterctl alpha phases apply-addons -a ./cmd/clusterctl/examples/aws/out/addons.yaml --kubeconfig=kubeconfig

Deploy the worker node

cat <<EOF | kubectl create --kubeconfig kind.kubeconfig -f -
apiVersion: "cluster.k8s.io/v1alpha1"
kind: Machine
metadata:
  generateName: aws-node-
  labels:
    set: node
spec:
  versions:
    kubelet: v1.13.0
  providerSpec:
    value:
      apiVersion: awsprovider/v1alpha1
      kind: AWSMachineProviderSpec
      instanceType: "t2.medium"
      iamInstanceProfile: "nodes.cluster-api-provider-aws.sigs.k8s.io"
      keyName: "default"
EOF
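After a few minutes the worker should appear both as a Machine in the KIND management cluster and as a node in the workload cluster:

```shell
# Machine object in the management (kind) cluster
kubectl get machines --kubeconfig kind.kubeconfig

# Corresponding node registering with the workload cluster
kubectl get nodes --kubeconfig kubeconfig
```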