Set up a Kubernetes cluster using the kops provisioning tool

Install kops and kubectl on the workstation

$ wget -O ~/.local/bin/kops https://github.com/kubernetes/kops/releases/download/1.7.0/kops-linux-amd64
$ chmod +x ~/.local/bin/kops

$ wget -O ~/.local/bin/kubectl https://storage.googleapis.com/kubernetes-release/release/v1.7.2/bin/linux/amd64/kubectl
$ chmod +x ~/.local/bin/kubectl
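
To confirm both binaries are on the PATH and runnable, a quick version check is enough (assuming ~/.local/bin is already on your PATH):

$ kops version
$ kubectl version --client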

Follow the kops tutorial

URL: https://github.com/kubernetes/kops/blob/master/docs/aws.md

# The ops team set up a dedicated user and group
$ aws iam create-group --group-name kops
$ aws iam create-user --user-name kops
$ aws iam add-user-to-group --user-name kops --group-name kops

# Verify that the user and group exist
$ aws iam list-groups-for-user --user-name=kops --query='Groups[].GroupName'
[
    "kops"
]
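
The kops AWS guide also attaches a set of managed policies to the kops group and creates access keys for the user; those steps were presumably handled by the ops team here, but a sketch of them looks roughly like this (policy list taken from the tutorial; adjust to your own access policy):

$ aws iam attach-group-policy --group-name kops --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
$ aws iam attach-group-policy --group-name kops --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
$ aws iam attach-group-policy --group-name kops --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess
$ aws iam attach-group-policy --group-name kops --policy-arn arn:aws:iam::aws:policy/IAMFullAccess
$ aws iam attach-group-policy --group-name kops --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess
$ aws iam create-access-key --user-name kops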

# Set up an AWS S3 bucket for kops to store cluster state
$ export AWS_DEFAULT_PROFILE=kops@acme
$ export AWS_DEFAULT_REGION=us-east-1
$ aws s3api create-bucket --bucket alexd-kops-state-store --region us-east-1
$ aws s3api put-bucket-versioning --bucket alexd-kops-state-store  --versioning-configuration Status=Enabled
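
As a sanity check, reading the versioning configuration back should report Status=Enabled (kops relies on versioning to recover previous cluster state):

$ aws s3api get-bucket-versioning --bucket alexd-kops-state-store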

# Let kops prepare the cluster plan and store it in the S3 bucket
$ export AWS_ACCESS_KEY_ID=secret
$ export AWS_SECRET_ACCESS_KEY=secret
$ export NAME=alexd-kops.k8s.local
$ export KOPS_STATE_STORE=s3://alexd-kops-state-store

$ kops create cluster \
    --cloud=aws \
    --zones=us-east-1a \
    --ssh-public-key=~/.ssh/alexey_diyan_at_acme.pub \
    ${NAME}

# Review/customize cluster configuration
$ kops edit cluster ${NAME}

The minimal/default kops configuration is as follows:

  • 1x VPC, 1x Subnet, 1x AZ
  • 1x ELB
  • 1x AutoscalingGroup for Masters, 1/1 min/max nodes, m3.medium, Debian Jessie, 64GB root gp2 type, PublicIP
  • 1x AutoscalingGroup for Nodes, 2/2 min/max nodes, t2.medium, Debian Jessie, 128GB root gp2 type, PublicIP
  • 2x EBSVolume for etcd-events and etcd-main, 2x 20GB, gp2 type
  • IAMInstanceProfile, IAMInstanceProfileRole, IAMRole, IAMRolePolicy
  • InternetGateway TODO
  • Keypair... TODO
  • 8x ManagedFile, yaml
  • Route, RouteTable, RouteTableAssociation, SSHKey
  • 9x Secret
  • 3x SecurityGroup for ELB, masters, nodes; 13x SecurityGroupRule
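
The master and node instance groups listed above can also be tuned before provisioning; for example, node count or instance type is changed by editing the corresponding instance group (a sketch, using the group names that kops validate reports later):

$ kops edit ig --name=${NAME} nodes
$ kops edit ig --name=${NAME} master-us-east-1a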

Build/run cluster

# This starts the actual provisioning process; be ready to start paying for the resources
$ kops update cluster ${NAME} --yes

# Wait about 5 minutes and then validate the cluster
$ kops validate cluster
Using cluster from kubectl context: alexd-kops.k8s.local

Validating cluster alexd-kops.k8s.local

INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	m3.medium	1	1	us-east-1a
nodes			Node	t2.medium	2	2	us-east-1a

NODE STATUS
NAME				ROLE	READY
ip-172-20-46-114.ec2.internal	node	True
ip-172-20-50-62.ec2.internal	master	True
ip-172-20-59-153.ec2.internal	node	True

Your cluster alexd-kops.k8s.local is ready

Terminate cluster

# Preview the AWS resources that will be terminated along with the cluster
$ kops delete cluster --name ${NAME}

# Execute the actual termination
$ kops delete cluster --name ${NAME} --yes
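
Once the cluster is gone for good, the state-store bucket can be removed as well (assuming nothing else stores state in it). Note that because versioning was enabled, old object versions may survive the recursive delete and need to be purged before the bucket removal succeeds:

$ aws s3 rb s3://alexd-kops-state-store --force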

Kubernetes usage

# Review nodes; they are just EC2 instances
$ kubectl get nodes
NAME                            STATUS    AGE       VERSION
ip-172-20-46-114.ec2.internal   Ready     2m        v1.7.0
ip-172-20-50-62.ec2.internal    Ready     3m        v1.7.0
ip-172-20-59-153.ec2.internal   Ready     1m        v1.7.0
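
Beyond the node list, kubectl cluster-info is a quick way to see which endpoints kops wired up (the API server behind the ELB and the cluster DNS add-on):

$ kubectl cluster-info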

# Review system pods; a pod is a group of tightly coupled Docker containers
$ kubectl --namespace=kube-system get pods
NAME                                                   READY     STATUS    RESTARTS   AGE
dns-controller-3497129722-3kpj7                        1/1       Running   0          8m
etcd-server-events-ip-172-20-50-62.ec2.internal        1/1       Running   0          8m
etcd-server-ip-172-20-50-62.ec2.internal               1/1       Running   0          8m
kube-apiserver-ip-172-20-50-62.ec2.internal            1/1       Running   0          8m
kube-controller-manager-ip-172-20-50-62.ec2.internal   1/1       Running   0          8m
kube-dns-479524115-8fw3d                               3/3       Running   0          7m
kube-dns-479524115-jqdqc                               3/3       Running   0          8m
kube-dns-autoscaler-1818915203-rxdrr                   1/1       Running   0          8m
kube-proxy-ip-172-20-46-114.ec2.internal               1/1       Running   0          7m
kube-proxy-ip-172-20-50-62.ec2.internal                1/1       Running   0          8m
kube-proxy-ip-172-20-59-153.ec2.internal               1/1       Running   0          8m
kube-scheduler-ip-172-20-50-62.ec2.internal            1/1       Running   0          8m

# Run redis
$ kubectl run redis --image=redis:4.0.1-alpine --restart=Never
pod "redis" created

# Review user pods
$ kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
redis     1/1       Running   0          1h
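
A bare pod like this is fine for a quick experiment, but it is not rescheduled if its node goes away. For anything longer-lived, the usual pattern is a Deployment plus a Service; a minimal sketch with the same image (the redis-deploy name is arbitrary; on kubectl 1.7, kubectl run without --restart=Never creates a Deployment):

$ kubectl run redis-deploy --image=redis:4.0.1-alpine --port=6379
$ kubectl expose deployment redis-deploy --port=6379
$ kubectl get deployments,services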

# Tail redis logs
$ kubectl logs -f redis
1:C 10 Aug 18:54:31.455 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 10 Aug 18:54:31.455 # Redis version=4.0.1, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 10 Aug 18:54:31.455 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 10 Aug 18:54:31.456 * Running mode=standalone, port=6379.
1:M 10 Aug 18:54:31.456 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 10 Aug 18:54:31.456 # Server initialized
1:M 10 Aug 18:54:31.456 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 10 Aug 18:54:31.456 * Ready to accept connections

# Forward the redis port to the local workstation
$ kubectl port-forward redis 6379
Forwarding from 127.0.0.1:6379 -> 6379
Forwarding from [::1]:6379 -> 6379
Handling connection for 6379

# Check that port forwarding works
$ telnet localhost 6379
Trying ::1...
Connected to localhost.
Escape character is '^]'.
INCR hello-world
:1
INCR hello-world
:2
^]
telnet> quit
Connection closed.
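
If redis-cli happens to be installed on the workstation, it is an easier client than raw telnet for checking the forwarded port (PONG confirms the round trip):

$ redis-cli -p 6379 ping
PONG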

# Run a command inside an existing container
$ kubectl exec redis -- cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.6.2
PRETTY_NAME="Alpine Linux v3.6"
HOME_URL="http://alpinelinux.org"
BUG_REPORT_URL="http://bugs.alpinelinux.org"
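
The redis image also ships redis-cli, so the same exec mechanism works for ad-hoc checks inside the container, or for an interactive shell:

$ kubectl exec redis -- redis-cli ping
PONG

# Open an interactive shell in the container
$ kubectl exec -it redis -- sh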