This document is an overview of setting up a Kubernetes cluster on three Ubuntu 18.04 nodes. The three nodes, and the components installed on each, are described below.
A Kubernetes cluster is made up of multiple components, many of which actually run as pods in the cluster under the kube-system namespace. The following components make up the k8s control plane:
etcd
- A synchronized data store for storing and sharing the cluster state across nodes in the cluster
kube-apiserver
- Serves the Kubernetes API. Any kubectl commands hit the API
kube-controller-manager
- Bundles a variety of controllers that handle the backend of the cluster
kube-scheduler
- Schedules pods to run on individual nodes
In addition to the components above there are also a couple of additional components that run on each node:
kubelet
- An agent that runs as a systemd service on each node
kube-proxy
- Handles network communication between nodes by adding firewall routing rules
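Once the cluster is up, you can see most of these components for yourself, since they run as pods in the kube-system namespace (a quick sketch; the exact pod names will vary by cluster):

```shell
# List the control plane pods (etcd, kube-apiserver, kube-controller-manager,
# kube-scheduler) along with the kube-proxy pod that runs on every node
kubectl get pods -n kube-system

# kubelet is the exception -- it is not a pod, it runs as a systemd
# service on each node
sudo systemctl status kubelet
```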
- Kube Master
  - Docker
  - kubeadm
  - kubelet
  - kubectl
  - Control Plane
- Kube Node 1
  - Docker
  - kubeadm
  - kubelet
  - kubectl
- Kube Node 2
  - Docker
  - kubeadm
  - kubelet
  - kubectl
kubeadm
- This is a tool which automates a large portion of the process of setting up a k8s cluster
kubelet
- The essential component of Kubernetes that handles running containers on a node. Every server that will run containers needs kubelet
kubectl
- command line tool for interacting with the cluster once it is up.
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
sudo apt-get update
sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu
sudo apt-mark hold docker-ce
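Before moving on, it is worth confirming that Docker installed cleanly and that the package is held at the pinned version (a quick sanity check; the exact version output depends on the install):

```shell
# Confirm the Docker daemon is running
sudo systemctl status docker

# Confirm the pinned Docker version is what got installed
sudo docker version

# Confirm the docker-ce package is held so apt upgrades won't move it
apt-mark showhold
```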
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet=1.12.7-00 kubeadm=1.12.7-00 kubectl=1.12.7-00
sudo apt-mark hold kubelet kubeadm kubectl
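A quick sanity check that each tool installed at the expected version and is held back from upgrades:

```shell
# Each of these should report the pinned 1.12.7 version
kubeadm version
kubectl version --client
kubelet --version

# kubelet, kubeadm, and kubectl should all appear in the hold list
apt-mark showhold
```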
Now we are ready to use kubeadm to build the cluster!
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
The special --pod-network-cidr setting will be needed later by the flannel networking plugin.
The following commands will allow you to use kubectl when logged into the master:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
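With the kubeconfig in place, kubectl should now be able to reach the API server from the master (the exact output depends on your cluster):

```shell
# Confirm kubectl can reach the API server using the new kubeconfig
kubectl cluster-info

# Show both the client and server versions
kubectl version
```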
The kubeadm init command should have provided a kubeadm join command in the output...
sudo kubeadm join $controller_ip:6443 --token $token --discovery-token-ca-cert-hash $hash
Copy that command from the Kube Master and run it with sudo on the kube nodes.
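If the original join command has scrolled out of your terminal history, kubeadm can print a fresh one on the master (note this creates a new token):

```shell
# Run on the Kube Master to print a ready-to-use join command
sudo kubeadm token create --print-join-command
```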
Now let's verify that our cluster is set up properly. From the Kube Master, get a list of nodes with kubectl:
kubectl get nodes
The output should look something like this...
NAME STATUS ROLES AGE VERSION
smford221c.mylabserver.com NotReady master 21m v1.12.7
smford222c.mylabserver.com NotReady <none> 12m v1.12.7
smford223c.mylabserver.com NotReady <none> 11m v1.12.7
It is expected at this point that the nodes will have a NotReady status; they will become Ready once networking is set up.
flannel is a virtual network that gives a subnet to each host for use with container runtimes.
Platforms like Google's Kubernetes assume that each container (pod) has a unique, routable IP inside the cluster. The advantage of this model is that it reduces the complexity of doing port mapping.
For networking to work, you will need to turn on net.bridge.bridge-nf-call-iptables on all three nodes:
echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
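You can confirm the setting took effect on each node:

```shell
# Should print: net.bridge.bridge-nf-call-iptables = 1
sysctl net.bridge.bridge-nf-call-iptables
```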
On the Kube Master, use kubectl to install Flannel using a YAML template:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
Verify that all the nodes now have a STATUS of Ready:
kubectl get nodes
You should see something like this...
NAME STATUS ROLES AGE VERSION
smford221c.mylabserver.com Ready master 45m v1.12.7
smford222c.mylabserver.com Ready <none> 37m v1.12.7
smford223c.mylabserver.com Ready <none> 35m v1.12.7
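As an optional final smoke test (the nginx name here is arbitrary), you can confirm that the flannel pods are running and that the cluster will actually schedule a workload. Note that on this kubectl version, kubectl run creates a Deployment:

```shell
# Flannel runs as a DaemonSet, so expect one kube-flannel pod per node
kubectl get pods -n kube-system -o wide

# Launch a throwaway nginx deployment and confirm a pod gets scheduled
# onto one of the worker nodes
kubectl run nginx --image=nginx
kubectl get pods -o wide

# Clean up the test deployment
kubectl delete deployment nginx
```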