@ndeloof
Created September 11, 2019 08:01
compose-on-eks

Following the docs at https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html

Create EKS cluster

➜  ~ eksctl create cluster --name kikito --version 1.14 --nodegroup-name standard-workers --node-type m5.large --nodes 2 --nodes-min 1 --nodes-max 4 --node-ami auto --ssh-access
[ℹ]  using region eu-west-3
[ℹ]  setting availability zones to [eu-west-3a eu-west-3b eu-west-3c]
[ℹ]  subnets for eu-west-3a - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ]  subnets for eu-west-3b - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ]  subnets for eu-west-3c - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ]  nodegroup "standard-workers" will use "ami-04fada31d8c50b7a8" [AmazonLinux2/1.14]
[ℹ]  using SSH public key "/home/nicolas/.ssh/id_rsa.pub" as "eksctl-kikito-nodegroup-standard-workers-09:c5:b6:a6:e4:66:26:50:79:39:0e:ed:7e:7b:de:37" 
[ℹ]  using Kubernetes version 1.14
[ℹ]  creating EKS cluster "kikito" in "eu-west-3" region
[ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
[ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=eu-west-3 --name=kikito'
[ℹ]  CloudWatch logging will not be enabled for cluster "kikito" in "eu-west-3"
[ℹ]  you can enable it with 'eksctl utils update-cluster-logging --region=eu-west-3 --name=kikito'
[ℹ]  2 sequential tasks: { create cluster control plane "kikito", create nodegroup "standard-workers" }
[ℹ]  building cluster stack "eksctl-kikito-cluster"
[ℹ]  deploying stack "eksctl-kikito-cluster"
[ℹ]  building nodegroup stack "eksctl-kikito-nodegroup-standard-workers"
[ℹ]  deploying stack "eksctl-kikito-nodegroup-standard-workers"
[✔]  all EKS cluster resource for "kikito" had been created
[✔]  saved kubeconfig as "/home/nicolas/.kube/config"
[ℹ]  adding role "arn:aws:iam::546848686991:role/eksctl-kikito-nodegroup-standard-NodeInstanceRole-1U99CTMQIU71V" to auth ConfigMap
[ℹ]  nodegroup "standard-workers" has 0 node(s)
[ℹ]  waiting for at least 1 node(s) to become ready in "standard-workers"
[ℹ]  nodegroup "standard-workers" has 2 node(s)
[ℹ]  node "ip-192-168-41-222.eu-west-3.compute.internal" is ready
[ℹ]  node "ip-192-168-83-167.eu-west-3.compute.internal" is not ready
[ℹ]  kubectl command should work with "/home/nicolas/.kube/config", try 'kubectl get nodes'
[✔]  EKS cluster "kikito" in "eu-west-3" region is ready

➜  ~ kubectl get nodes
NAME                                           STATUS   ROLES    AGE     VERSION
ip-192-168-41-222.eu-west-3.compute.internal   Ready    <none>   2m18s   v1.14.6-eks-5047ed
ip-192-168-83-167.eu-west-3.compute.internal   Ready    <none>   2m18s   v1.14.6-eks-5047ed

Install etcd

Set up Helm

➜  ~ kubectl -n kube-system create serviceaccount tiller
serviceaccount/tiller created
➜  ~ kubectl -n kube-system create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller created
➜  ~ helm init --service-account tiller
$HELM_HOME has been configured at /home/nicolas/.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
➜  ~ helm init --service-account tiller --upgrade
$HELM_HOME has been configured at /home/nicolas/.helm.
➜  ~ helm repo update     
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete.
➜  ~ kubectl get pods --namespace kube-system
NAME                             READY   STATUS    RESTARTS   AGE
aws-node-d244t                   1/1     Running   0          37m
aws-node-df8sx                   1/1     Running   0          37m
coredns-585b7dd4c-lwn4n          1/1     Running   0          42m
coredns-585b7dd4c-rd8cj          1/1     Running   0          42m
kube-proxy-rrv72                 1/1     Running   0          37m
kube-proxy-vncct                 1/1     Running   0          37m
tiller-deploy-7f4d76c4b6-rcn9t   1/1     Running   0          2m24s

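The two imperative kubectl create commands above can equivalently be written as a declarative manifest. This is a sketch of that equivalent, not what was actually applied:

```yaml
# Equivalent manifest for the ServiceAccount and ClusterRoleBinding
# created imperatively above (sketch; the transcript used kubectl create).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
```

Binding Tiller to cluster-admin is the quick-start approach; a production cluster would scope the role more tightly.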
Install the etcd operator

➜  ~ helm install --name etcd-operator stable/etcd-operator --namespace compose
NAME:   etcd-operator
LAST DEPLOYED: Wed Sep 11 09:25:04 2019
NAMESPACE: compose
STATUS: DEPLOYED

➜  ~ kubectl apply -f etcd.yml 
etcdcluster.etcd.database.coreos.com/compose-etcd created

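The contents of etcd.yml are not shown in the transcript. A minimal EtcdCluster sketch that would produce the resource created above (the name and namespace match the output; the size and version values are assumptions, not taken from the gist):

```yaml
# Sketch of etcd.yml — name and namespace match the transcript;
# size and version are assumed values.
apiVersion: etcd.database.coreos.com/v1beta2
kind: EtcdCluster
metadata:
  name: compose-etcd
  namespace: compose
spec:
  size: 3
  version: "3.2.13"
```

A size of 3 is consistent with the three compose-etcd-* pods visible in the later pod listings.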
➜  ~ kubectl get pods --namespace compose
NAME                                                              READY   STATUS    RESTARTS   AGE
compose-etcd-dv4l4f6z4q                                           1/1     Running   0          12s
compose-etcd-hl568pzcgf                                           1/1     Running   0          29s
etcd-operator-etcd-operator-etcd-backup-operator-88d6bc55ct9qbm   1/1     Running   0          7m25s
etcd-operator-etcd-operator-etcd-operator-56c55d965f-vnbmz        1/1     Running   0          7m25s
etcd-operator-etcd-operator-etcd-restore-operator-55f6ccbftsjs5   1/1     Running   0          7m25s

Install compose-on-kube

➜  ~ Downloads/installer-linux -namespace=compose -etcd-servers=http://compose-etcd-client:2379
INFO[0000] Checking installation state                  
INFO[0000] Install image with tag "latest" in namespace "compose" 
INFO[0001] Api server: image: "docker/kube-compose-api-server:latest", pullPolicy: "Always" 
INFO[0001] Controller: image: "docker/kube-compose-controller:latest", pullPolicy: "Always"

➜  ~ kubectl get pods --namespace compose
NAME                                                              READY   STATUS    RESTARTS   AGE
compose-7885b6584d-ks494                                          1/1     Running   0          21s
compose-api-d64b76b9c-dtdd5                                       1/1     Running   0          21s
compose-etcd-cwpz2gkpv2                                           1/1     Running   0          23m
compose-etcd-dv4l4f6z4q                                           1/1     Running   0          23m
compose-etcd-hl568pzcgf                                           1/1     Running   0          23m
etcd-operator-etcd-operator-etcd-backup-operator-88d6bc55ct9qbm   1/1     Running   0          30m
etcd-operator-etcd-operator-etcd-operator-56c55d965f-vnbmz        1/1     Running   0          30m
etcd-operator-etcd-operator-etcd-restore-operator-55f6ccbftsjs5   1/1     Running   0          30m


➜  ~ kubectl api-versions | grep compose
compose.docker.com/v1alpha3
compose.docker.com/v1beta1
compose.docker.com/v1beta2

Deploy app

➜  ~ docker stack deploy --orchestrator=kubernetes -c docker-compose.yml hellokube
Ignoring unsupported options: build

service "db": build is ignored
service "words": build is ignored
service "web": build is ignored
Waiting for the stack to be stable and running...
db: Ready               [pod status: 1/1 ready, 0/1 pending, 0/1 failed]
web: Ready              [pod status: 1/1 ready, 0/1 pending, 0/1 failed]
words: Ready            [pod status: 1/5 ready, 4/5 pending, 0/5 failed]

Stack hellokube is stable and running

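The docker-compose.yml itself is not included in the gist. A sketch consistent with the deploy output above — three services (db, web, and words with 5 replicas), each carrying a build section that the Kubernetes orchestrator ignores; the image names and published port are assumptions:

```yaml
# Sketch consistent with the transcript: services db, web, and
# words (5 replicas), each with a build section (ignored on Kubernetes,
# hence the "build is ignored" warnings). Image names and the
# published port are assumed, not from the gist.
version: "3.7"
services:
  db:
    build: db
    image: db
  words:
    build: words
    image: words
    deploy:
      replicas: 5
  web:
    build: web
    image: web
    ports:
      - "8080:80"
```

The deploy.replicas: 5 setting matches the five words-* pods listed below.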
➜  ~ kubectl get pods --all-namespaces 
NAMESPACE     NAME                                                              READY   STATUS    RESTARTS   AGE
compose       compose-7885b6584d-ks494                                          1/1     Running   0          2m54s
compose       compose-api-d64b76b9c-dtdd5                                       1/1     Running   0          2m54s
compose       compose-etcd-cwpz2gkpv2                                           1/1     Running   0          25m
compose       compose-etcd-dv4l4f6z4q                                           1/1     Running   0          26m
compose       compose-etcd-hl568pzcgf                                           1/1     Running   0          26m
compose       etcd-operator-etcd-operator-etcd-backup-operator-88d6bc55ct9qbm   1/1     Running   0          33m
compose       etcd-operator-etcd-operator-etcd-operator-56c55d965f-vnbmz        1/1     Running   0          33m
compose       etcd-operator-etcd-operator-etcd-restore-operator-55f6ccbftsjs5   1/1     Running   0          33m
default       db-6ddc44cb59-zpjxx                                               1/1     Running   0          40s
default       web-78ffbff454-twn5v                                              1/1     Running   0          40s
default       words-79bf8c7cc8-4zbn7                                            1/1     Running   0          40s
default       words-79bf8c7cc8-5pcxq                                            1/1     Running   0          40s
default       words-79bf8c7cc8-l42v6                                            1/1     Running   0          40s
default       words-79bf8c7cc8-p8f4p                                            1/1     Running   0          40s
default       words-79bf8c7cc8-spmh4                                            1/1     Running   0          40s
kube-system   aws-node-d244t                                                    1/1     Running   0          71m
kube-system   aws-node-df8sx                                                    1/1     Running   0          71m
kube-system   coredns-585b7dd4c-lwn4n                                           1/1     Running   0          76m
kube-system   coredns-585b7dd4c-rd8cj                                           1/1     Running   0          76m
kube-system   kube-proxy-rrv72                                                  1/1     Running   0          71m
kube-system   kube-proxy-vncct                                                  1/1     Running   0          71m
kube-system   tiller-deploy-7f4d76c4b6-rcn9t                                    1/1     Running   0          36m