How Hugo sets up AWS Kubernetes clusters

Loosely based on: https://zero-to-jupyterhub.readthedocs.io/en/latest/amazon/step-zero-aws.html

AWS setup

Create an IAM role with these permissions:

  • AmazonEC2FullAccess
  • IAMFullAccess
  • AmazonS3FullAccess
  • AmazonVPCFullAccess
  • AmazonRoute53FullAccess (optional)
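
If you'd rather do this from a shell than the AWS console, something like the following should work (a sketch; the role name kops-admin and the trust.json filename are placeholders of mine, not from the original notes):

cat > trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"Service": "ec2.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
# create the role and attach each managed policy from the list above
aws iam create-role --role-name kops-admin --assume-role-policy-document file://trust.json
for policy in AmazonEC2FullAccess IAMFullAccess AmazonS3FullAccess AmazonVPCFullAccess AmazonRoute53FullAccess; do
    aws iam attach-role-policy --role-name kops-admin --policy-arn arn:aws:iam::aws:policy/$policy
done
# an instance profile is the thing you actually attach to an EC2 instance
aws iam create-instance-profile --instance-profile-name kops-admin
aws iam add-role-to-instance-profile --instance-profile-name kops-admin --role-name kops-admin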

Spin up an EC2 instance (it can be small, a t3.micro for example) to use as the server from which you create your Kubernetes cluster. Assign that IAM role to this instance.

Create an S3 bucket that kops will use to store cluster configuration.
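
For example, with the awscli (installed below; the bucket name is just the one used in the environment variables):

aws s3api create-bucket \
    --bucket my-cluster-data \
    --region us-east-2 \
    --create-bucket-configuration LocationConstraint=us-east-2
# versioning is worth enabling so you can recover older cluster state
aws s3api put-bucket-versioning \
    --bucket my-cluster-data \
    --versioning-configuration Status=Enabled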

Set some environment variables

(of course, choose different region/zones if you want)

export KOPS_STATE_STORE=s3://my-cluster-data  # this is the bucket that you just created
export NAME=CLUSTER_NAME
export REGION=us-east-2
export ZONES=us-east-2c

Install the following pieces of software:

  • kops
  • awscli
  • kubectl

for kops: https://github.com/kubernetes/kops/blob/master/docs/install.md

for awscli:

sudo apt install python-pip
sudo pip install awscli

for kubectl:

curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
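
A quick way to check that all three tools work, and that the instance is actually picking up the IAM role:

kops version
aws --version
aws sts get-caller-identity   # should show the IAM role you attached to the instance
kubectl version --client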

Create the cluster

kops create cluster $NAME \
  --zones $ZONES \
  --authorization RBAC \
  --master-size t3.small \
  --master-volume-size 30 \
  --node-size t3.small \
  --node-volume-size 10 \
  --node-count 1 \
  --topology private \
  --networking weave \
  --yes

RBAC is pretty important - if you don't enable it, your cluster is pretty insecure. I'm not sure how important private topology is, but it's a good idea. A private topology runs all your nodes in private subnets of a VPC, which protects them from outside traffic. Weave is one of the CNI networking plugins that works with private topology (there are other options, and I think GCP uses Calico).
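
Cluster creation takes a while. Once the AWS resources exist, you can check that everything converged (re-run validate until it reports the cluster is ready):

kops validate cluster $NAME
kubectl get nodes   # the master and node should both show Ready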

Apply the following overrides to the spec section of your cluster configuration using kops edit cluster $NAME

kubeAPIServer:
  runtimeConfig:
    scheduling.k8s.io/v1alpha1: "true"
    admissionregistration.k8s.io/v1beta1: "true"
    autoscaling/v2beta1: "true"
  admissionControl:
  - Priority
  featureGates:
    PodPriority: "true"
kubelet:
  featureGates:
    PodPriority: "true"
kubeScheduler:
  featureGates:
    PodPriority: "true"
kubeControllerManager:
  horizontalPodAutoscalerUseRestClients: true
  featureGates:
    PodPriority: "true"

These do two things:

  • enable autoscaling
  • enable pod priority - I use this so I can overprovision nodes so that startup times are faster, but you probably don't need it (see the sketch after this list)
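
These notes don't include the overprovisioning manifests themselves, but the usual pattern is a low-priority "placeholder" deployment of pause containers that real workloads preempt; roughly like this (all names here are hypothetical, and the replica count and resource requests control how much headroom you reserve):

apiVersion: scheduling.k8s.io/v1alpha1
kind: PriorityClass
metadata:
  name: overprovisioning
value: -1          # lower than the default priority of 0, so anything can preempt these pods
globalDefault: false
description: "Placeholder pods that reserve capacity for faster startup"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: overprovisioning
spec:
  replicas: 1      # scale this up for more headroom
  selector:
    matchLabels:
      app: overprovisioning
  template:
    metadata:
      labels:
        app: overprovisioning
    spec:
      priorityClassName: overprovisioning
      containers:
      - name: pause
        image: k8s.gcr.io/pause   # does nothing; exists only to hold resource requests
        resources:
          requests:
            cpu: "1"
            memory: 1Gi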

Create a storage class for your cluster

Put the following into storageclass.yaml:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  name: gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2

execute:

kubectl apply -f storageclass.yaml
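
Then check that the class registered and is marked as the default:

kubectl get storageclass   # gp2 should be listed as (default)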

Install Helm

kubectl create serviceaccount tiller --namespace=kube-system
kubectl create clusterrolebinding tiller-admin --serviceaccount=kube-system:tiller --clusterrole=cluster-admin
sudo snap install helm --classic
helm init --service-account=tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
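
You can confirm tiller came up with:

helm version                                     # should print both client and server versions
kubectl get pods --namespace kube-system | grep tiller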

Create an instance/autoscaling group

shove the following into nodes.yaml:

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: REPLACE_THIS_WITH_CLUSTER_NAME
  name: t3.medium
spec:
  image: kope.io/k8s-1.11-debian-stretch-amd64-hvm-ebs-2018-08-17
  machineType: t3.medium
  maxSize: 10
  minSize: 0
  nodeLabels:
    kops.k8s.io/instancegroup: t3.medium
    saturn/ig: t3.medium
  role: Node
  rootVolumeSize: 40
  subnets:
  - us-east-2c
  cloudLabels:
    k8s.io/cluster-autoscaler/enabled: ""
    k8s.io/cluster-autoscaler/node-template/label: ""
    kubernetes.io/cluster/REPLACE_THIS_WITH_CLUSTER_NAME: owned
  additionalPolicies:
    node: |
      [
        {
          "Effect": "Allow",
          "Action": [
            "autoscaling:DescribeAutoScalingGroups",
            "autoscaling:DescribeAutoScalingInstances",
            "autoscaling:SetDesiredCapacity",
            "autoscaling:DescribeLaunchConfigurations",
            "autoscaling:DescribeTags",
            "autoscaling:TerminateInstanceInAutoScalingGroup"
          ],
          "Resource": ["*"]
        }
      ]

and then execute kops create ig -f nodes.yaml
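
Note that kops create ig only records the instance group in the state store; as far as I know you still need an update to actually build the autoscaling group in AWS:

kops update cluster $NAME --yes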

Set up the autoscaler

helm install --name autoscaler \
    --namespace kube-system \
    --set autoDiscovery.clusterName=${NAME} \
    --set extraArgs.balance-similar-node-groups=false \
    --set extraArgs.expander=least-waste \
    --set rbac.create=true \
    --set rbac.pspEnabled=true \
    --set awsRegion=${REGION} \
    --set awsZones=${ZONES} \
    --set nodeSelector."node-role\.kubernetes\.io/master"="" \
    --set tolerations[0].effect=NoSchedule \
    --set tolerations[0].key=node-role.kubernetes.io/master \
    --set cloudProvider=aws \
    stable/cluster-autoscaler
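
To check the autoscaler is running (the exact pod name depends on the chart version), and optionally to test scale-up by scheduling more pods than the cluster can currently hold (kubectl run with --replicas is deprecated in newer kubectl versions, but works with the kubectl of this era):

kubectl get pods --namespace kube-system | grep autoscaler
kubectl logs --namespace kube-system <autoscaler-pod-name>   # should show the ASG being discovered

# a throwaway scale test - the resource requests are what force new nodes
kubectl run scale-test --image=nginx --replicas=15 --requests=cpu=500m
kubectl get nodes -w          # new nodes should appear within a few minutes
kubectl delete deployment scale-test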

Install kube2iam

If you are the only one using this cluster, then this isn't critical. BUT if you don't do this, every container will have access to the IAM credentials of the host.

helm install stable/kube2iam --name kube2iam \
    --namespace=kube-system \
    --set host.iptables=true \
    --set rbac.create=true \
    --set host.interface=weave
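
With kube2iam in place, a pod only gets AWS permissions if it asks for a role via the iam.amazonaws.com/role annotation. A sketch (the role name is hypothetical, and its trust policy has to allow the node's IAM role to assume it):

apiVersion: v1
kind: Pod
metadata:
  name: aws-test
  annotations:
    iam.amazonaws.com/role: my-pod-role   # hypothetical role; must trust the node's IAM role
spec:
  containers:
  - name: awscli
    image: mesosphere/aws-cli             # any image with the awscli will do
    command: ["aws", "sts", "get-caller-identity"]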