@smford22
Last active August 14, 2021 17:24

Setting Up Kubernetes on Ubuntu 18.04

This document is an overview of setting up a Kubernetes cluster on three Ubuntu 18.04 nodes: one master and two worker nodes, each with the components listed below.

Kubernetes Components

A Kubernetes cluster is made up of multiple components, many of which run as pods in the kube-system namespace. The following components make up the k8s control plane:

etcd

etcd is a distributed key-value store that holds the cluster state and keeps it synchronized across nodes in the cluster

kube-apiserver

Serves as the Kubernetes API. Any kubectl command hits this API

kube-controller-manager

Bundles a variety of controllers that handle backend tasks in the cluster

kube-scheduler

Schedules pods to run on individual nodes

In addition to the components above there are also a couple of additional components that run on each node:

kubelet

An agent that runs as a systemd service on each node

kube-proxy

Handles network communication between nodes by managing iptables (firewall) routing rules
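On a working kubeadm cluster, most of these components can be inspected directly. The sketch below assumes a cluster is already up; exact pod names vary, and each command falls back to a message so the snippet is safe to paste on a machine that is not yet set up:

```shell
# Control-plane components and kube-proxy run as pods in kube-system
# (etcd-*, kube-apiserver-*, kube-controller-manager-*, kube-scheduler-*,
# plus one kube-proxy pod per node).
kubectl get pods -n kube-system 2>/dev/null || echo "kubectl not available yet"

# kubelet is the exception: it runs as a systemd service, not a pod.
systemctl status kubelet --no-pager 2>/dev/null || echo "kubelet not installed yet"
```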

  • Kube Master

    • Docker
    • kubeadm
    • kubelet
    • kubectl
    • Control Plane
  • Kube Node 1

    • Docker
    • kubeadm
    • kubelet
    • kubectl
  • Kube Node 2

    • Docker
    • kubeadm
    • kubelet
    • kubectl

kubeadm - A tool which automates a large portion of the process of setting up a k8s cluster

kubelet - The essential component of Kubernetes that handles running containers on a node. Every server that will be running containers needs kubelet

kubectl - The command line tool for interacting with the cluster once it is up.

Install Docker

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"

sudo apt-get update

sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu

sudo apt-mark hold docker-ce
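A quick sanity check (optional, and guarded so it is safe to run on a machine where the install has not happened yet) confirms that Docker is present and the package is held at its pinned version:

```shell
# Confirm the Docker engine version and that docker-ce is on hold.
docker --version 2>/dev/null || echo "docker not installed yet"
apt-mark showhold 2>/dev/null | grep docker-ce || echo "docker-ce not on hold"
```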

Install kubeadm, kubelet, and kubectl

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

sudo apt-get update

sudo apt-get install -y kubelet=1.12.7-00 kubeadm=1.12.7-00 kubectl=1.12.7-00

sudo apt-mark hold kubelet kubeadm kubectl
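You can confirm the pinned tool versions on each node. This is a hedged check (expect v1.12.7 given the packages above; each command falls back to a message if the tool is missing):

```shell
# Confirm the installed Kubernetes tool versions match the pinned packages.
kubeadm version -o short 2>/dev/null || echo "kubeadm not installed yet"
kubectl version --client --short 2>/dev/null || echo "kubectl not installed yet"
kubelet --version 2>/dev/null || echo "kubelet not installed yet"
```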

Bootstrapping the Cluster

Now we are ready to use kubeadm to build the cluster!

Initialize the cluster on the master server:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

The --pod-network-cidr flag sets the pod network range; Flannel expects 10.244.0.0/16, so it must be set here for the networking step later.

Set up kubeconfig for the local user on the master

This will allow you to use kubectl when logged into the master

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
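After copying the file, a quick check confirms kubectl is picking up the config. The snippet below is guarded so it is safe to run before the copy has happened:

```shell
# The kubeconfig should exist under the current user's home and be owned
# by that user; kubectl reads it from ~/.kube/config by default.
ls -l "$HOME/.kube/config" 2>/dev/null || echo "kubeconfig not copied yet"
kubectl config current-context 2>/dev/null || echo "no context configured"
```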

Join nodes to the cluster

The kubeadm init command should have provided a kubeadm join command in the output...

sudo kubeadm join $controller_ip:6443 --token $token --discovery-token-ca-cert-hash $hash

Copy that command from the kube master and run it with sudo on the kube nodes.
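If you lose the join command or the token expires (kubeadm tokens are valid for 24 hours by default), you can print a fresh one on the master with kubeadm's token subcommand. The snippet is guarded so it is safe to paste on a machine without kubeadm:

```shell
# Generate a new token and print the full join command for worker nodes.
command -v kubeadm >/dev/null 2>&1 \
  && sudo kubeadm token create --print-join-command \
  || echo "run this on the kube master"
```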

Verify the cluster is set up

Now let's verify that our cluster is set up properly. From the kube master, get a list of nodes with kubectl:

kubectl get nodes

The output should look something like this...

NAME                         STATUS     ROLES    AGE   VERSION
smford221c.mylabserver.com   NotReady   master   21m   v1.12.7
smford222c.mylabserver.com   NotReady   <none>   12m   v1.12.7
smford223c.mylabserver.com   NotReady   <none>   11m   v1.12.7

At this point the nodes are expected to have a NotReady status, because no pod network plugin has been installed yet

Configuring Networking with Flannel

flannel is a virtual network that gives a subnet to each host for use with container runtimes.

Platforms like Google's Kubernetes assume that each container (pod) has a unique, routable IP inside the cluster. The advantage of this model is that it reduces the complexity of doing port mapping.

Turn on net.bridge.bridge-nf-call-iptables on all three nodes

For networking to work, you will need to turn on net.bridge.bridge-nf-call-iptables on all three nodes:

echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
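You can verify the setting took effect on each node. Note that this key only exists once the br_netfilter kernel module is loaded, so the check below falls back to a message rather than failing:

```shell
# Should print 1 on a correctly configured node.
sysctl -n net.bridge.bridge-nf-call-iptables 2>/dev/null \
  || echo "br_netfilter module not loaded"
```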

Install Flannel on the Kube Master

On the Kube Master, use kubectl to install Flannel using a YAML template:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
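Flannel runs as a DaemonSet, one pod per node, and the nodes flip to Ready once their flannel pod is Running. The label selector below comes from the manifest above (`app: flannel`); the command is guarded so it is safe to paste off the master:

```shell
# Watch the flannel pods start on each node; all should reach Running.
kubectl get pods -n kube-system -l app=flannel 2>/dev/null \
  || echo "run this on the kube master"
```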

Verify the cluster is now ready

Verify that all the nodes now have a STATUS of Ready:

kubectl get nodes

You should see something like this...

NAME                         STATUS   ROLES    AGE   VERSION
smford221c.mylabserver.com   Ready    master   45m   v1.12.7
smford222c.mylabserver.com   Ready    <none>   37m   v1.12.7
smford223c.mylabserver.com   Ready    <none>   35m   v1.12.7