Setup K8S Cluster

Install K8S cluster

This guide gives step-by-step instructions for setting up a k8s cluster on a single physical server (host machine) with qemu (libvirt) virtual machines and a single public IP.

Development node setup

On the development node, you should set up a VM manager, SSH with key access to the host node, and kubectl.

Create base virtual machine

Log in to the host machine through SSH, set up qemu, and download the Ubuntu server image:

$ ssh root@<hostmachine>
hostmachine# apt install -y qemu-kvm libvirt-daemon-system libvirt-clients

Download the Ubuntu server image and copy it to the libvirt images folder:

hostmachine# wget https://releases.ubuntu.com/20.04.1/ubuntu-20.04.1-live-server-amd64.iso
hostmachine# cp ubuntu-20.04.1-live-server-amd64.iso /var/lib/libvirt/images/

On the local dev machine, run the VM manager (virt-manager). Open the menu File -> Add Connection, enable Connect to remote host over SSH, and set Username and Hostname to those of the hostmachine; leave Hypervisor as QEMU/KVM.

After clicking Connect you will see all VMs on the hostmachine under QEMU/KVM:<hostmachine>; the list will be empty on the first run. Right-click that line and choose New. On the first step, Connection should be set to QEMU/KVM:<hostmachine> and the install method to Local install media.... On the second step, click Browse... and add the folder /var/lib/libvirt/images on the hostmachine. There you will find ubuntu-20.04.1-live-server-amd64.iso; select it and click Choose Volume. In the Choose the operating system... field, type ubuntu and select the latest available version from the autocomplete list.

On the next page, set a 4 GB memory limit and 2 CPUs. In the disk settings, set 25 GB. On the last page set the name kubenode1, click Finish, and run the Ubuntu server installation.

During installation Ubuntu will ask for a username and an SSH key import. Remember that you will use this user and key for the rest of the k8s setup, so it is better to use your regular key (for example the one from GitHub) and your regular username.

Clone virtual machines

After the base virtual machine is created, shut it down from the VM manager list. Right-click it, choose Open, and on the toolbar click Manage VM snapshots. Create one snapshot of the freshly installed VM.

Close this VM view window and go back to the VM list: right-click this VM in the list and choose the context menu item Clone....

Let's make a clone of this VM. Leave everything as is and change only the name of the VM: kubenode2. After creating this VM, don't start it. Make 2 more clones, kubenode3 and kubenode4.
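
If you prefer the CLI, the same clones can be made on the hostmachine with virt-clone (from the virtinst package); a sketch, assuming the base VM kubenode1 is shut down:

hostmachine# apt install -y virtinst
hostmachine# for n in 2 3 4; do virt-clone --original kubenode1 --name kubenode$n --auto-clone; done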

Set persistent IPs for the VMs

Libvirt uses a bridge network interface and NAT. IP addresses for the VMs are assigned by the hypervisor's DHCP. We need to make these IPs persistent. All we need for that is each VM's name and MAC address. You can open each VM in the VM manager and copy the MAC address from the network information tab.

Once you have the full list of MAC addresses, let's do the network setup in qemu. We could do it from the VM manager, but for me that did not work reliably, so let's log in to the hostmachine through SSH and use the virsh CLI to set up the VM network.

hostmachine# virsh
virsh #

Now you are in the qemu CLI, so run the following command for each VM according to this template:

virsh # net-update default add ip-dhcp-host \
"<host mac='<MAC_ADDRESS>' name='<VM_NAME>' ip='<IP_ADDRESS>' />" \
--live --config

IP_ADDRESS should be from the DHCP range, which you can get with:

virsh # net-dumpxml default
<network>
...
    <ip address='....
        <dhcp>
            <range start='<START>' end='<END>'/>
...

Apply the net-update command for all VMs with their corresponding MAC and IP values. After that you can start all the VMs from the VM manager; they should get the IPs you set through net-update.
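
For example, assuming kubenode2 was assigned MAC address 52:54:00:11:22:33 (take the real value from the VM manager) and you want to pin it to 192.168.122.12 from the default DHCP range, the command would look like:

virsh # net-update default add ip-dhcp-host \
"<host mac='52:54:00:11:22:33' name='kubenode2' ip='192.168.122.12' />" \
--live --config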

Change VMs hostnames

The cloned VMs all have the same hostname, kubenode1. Open the console of each VM one by one from the VM manager and change the hostname to the corresponding value (kubenode2, kubenode3, kubenode4):

kubenode1$ sudo -i
# hostnamectl set-hostname <kubenode#>
# vim /etc/hosts
# reboot

Do this on all VMs except the first one.
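
In /etc/hosts the only line to touch is the 127.0.1.1 entry that still points to the old hostname; for example, on the second node (assuming the default Ubuntu layout):

kubenode2$ sudo sed -i 's/kubenode1/kubenode2/g' /etc/hosts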

Install Ansible and Kubespray

We use the Kubespray Ansible playbooks to install K8S on the VMs. The simplest way to use them is to run them from the hostmachine, because from there we have direct access to the VMs by their IPs. SSH to the hostmachine and start:

$ ssh <hostmachine>
# pip3 install ansible
# git clone https://github.com/kubernetes-sigs/kubespray.git
# cd kubespray
# pip3 install -r requirements.txt

Configure Kubespray and run deploy

We need to set some values for the Kubespray playbooks. First of all, copy the sample inventory as our cluster setup:

# cp -rfp inventory/sample inventory/<CLUSTER_NAME>

Replace <CLUSTER_NAME> with a name without spaces. After that, declare the list of VM IPs and build the Ansible inventory:

# declare -a IPS=(<KUBENODE1_IP> <KUBENODE2_IP> <KUBENODE3_IP> <KUBENODE4_IP>)
# CONFIG_FILE=inventory/<CLUSTER_NAME>/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
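
As an illustration, with a cluster named mycluster and hypothetical VM IPs 192.168.122.11-14 from the previous step, the commands become:

# declare -a IPS=(192.168.122.11 192.168.122.12 192.168.122.13 192.168.122.14)
# CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}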

Replace <KUBENODE#_IP> with the corresponding VM IPs and <CLUSTER_NAME> with your cluster name. We also need to edit the inventory/<CLUSTER_NAME>/group_vars/all/all.yml file to allow use of an external LB. Uncomment the ## External LB example config section and set the corresponding values:

...
## External LB example config
apiserver_loadbalancer_domain_name: <HOSTMACHINE_DOMAIN_NAME> # example "lb-kube.devinlab.com"
loadbalancer_apiserver:
  address: <HOSTMACHINE_PUBLIC_IP> # example 95.216.18.139 
  port: 8383 # this port will be used by kubectl API
...

Also disable the internal LB: find the loadbalancer_apiserver_localhost: setting and set it to false:

...
loadbalancer_apiserver_localhost: false
...

Now we are ready to start the installation. Replace <USER> with the username you created during the Ubuntu setup in the VMs.

ansible-playbook -i inventory/<CLUSTER_NAME>/hosts.yaml --user <USER> --become --become-user=root --ask-become-pass cluster.yml

After the playbook finishes, you will have a K8S cluster.
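
Before configuring kubectl on the dev node, you can sanity-check the cluster directly from the first master node. A quick check, assuming Kubespray installed kubectl on the masters and using the user created during the Ubuntu install:

# ssh <USER>@<KUBENODE1_IP>
kubenode1$ sudo kubectl get nodes --kubeconfig /etc/kubernetes/admin.conf
kubenode1$ exit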

Make snapshots of VMs (OPTIONAL)

To experiment with the cluster and keep a rollback option, go to the VM list in the VM manager right after the cluster is created and make a snapshot of each kubenode VM. To reset the cluster later, you can stop all the VMs and start them again from these snapshots.

Setup kubectl

We need kubectl as the tool to manage our K8S cluster, and we will use it from our dev environment. To set up kubectl we need the config file from one of the master nodes of our k8s cluster. Let's get it in two steps.

First, copy /etc/kubernetes/admin.conf from the first master node (the kubenode1 VM) to the hostmachine. Run these commands on your hostmachine:

# ssh <USER>@<KUBENODE1_IP>
kubenode1$ sudo cp /etc/kubernetes/admin.conf /home/<USER>/admin.conf
kubenode1$ sudo chown <USER>:<USER> /home/<USER>/admin.conf
kubenode1$ exit
# scp <USER>@<KUBENODE1_IP>:~/admin.conf ~/

Second, copy admin.conf from the hostmachine to your dev environment. Run these commands on the local dev machine:

$ mkdir -p ~/.kube
$ scp <hostmachine>:~/admin.conf ~/.kube/config

And finally we can check our k8s setup from our dev environment:

$ kubectl cluster-info 
Kubernetes master is running at https://<HOSTMACHINE_DOMAIN_NAME>:8383

Install K8S "package manager" Helm

K8S has a package manager, Helm. It can be used to install the nginx ingress controller and cert-manager.

Please check the docs and choose the Helm install method for your dev node distro.
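
For most Linux dev nodes, the official installer script from the Helm docs is the quickest option (a sketch; distro packages work just as well):

$ curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
$ helm version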

Install Ingress controller

After that we can install the nginx ingress controller, which provides domain-to-service mapping; external traffic will be sent to this controller inside the k8s cluster:

$ helm repo add nginx-stable https://helm.nginx.com/stable
$ helm repo update
$ helm install nginx-ingress nginx-stable/nginx-ingress --set controller.service.type=NodePort

Now a Helm chart release for the nginx ingress controller, named nginx-ingress, has been created. You can check it:

$ kubectl get svc
NAME                                               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
nginx-ingress-ingress-nginx-controller             NodePort    10.233.39.67    <none>        80:32513/TCP,443:30086/TCP   5m21s
nginx-ingress-ingress-nginx-controller-admission   ClusterIP   10.233.23.233   <none>        443/TCP                      5m21s
kubernetes                                         ClusterIP   10.233.0.1      <none>        443/TCP                      23h

Ports 32513 and 30086 are the ones we care about; we will put them into the HAProxy external load balancer config.
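
If you don't want to copy the ports by hand (they will differ on your cluster), you can read the assigned NodePorts straight from the service; a small sketch assuming the service name from the output above:

$ kubectl get svc nginx-ingress-ingress-nginx-controller \
    -o jsonpath='{.spec.ports[?(@.port==80)].nodePort} {.spec.ports[?(@.port==443)].nodePort}'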

Create external LB

This tutorial uses HAProxy as the external load balancer for the K8S cluster. Let's set it up on the hostmachine:

# apt-get install -y haproxy

Now we should change the settings in /etc/haproxy/haproxy.cfg: remove all listen configs at the end of the file and add the config below. Replace <EXTERNAL_IP> and <KUBENODE#_IP> with the corresponding values, using the ports we got from the ingress installation (32513 for HTTP and 30086 for HTTPS):

listen kubernetes-apiserver-https
  bind <EXTERNAL_IP>:8383
  mode tcp
  option log-health-checks
  timeout client 3h
  timeout server 3h
  server master1 <KUBENODE1_IP>:6443 check check-ssl verify none inter 10000
  server master2 <KUBENODE2_IP>:6443 check check-ssl verify none inter 10000
  balance roundrobin


frontend http_front
  bind <EXTERNAL_IP>:80
  mode tcp
  default_backend http_back

frontend https_front
  bind <EXTERNAL_IP>:443
  mode tcp
  default_backend https_back


backend http_back
  balance roundrobin
  mode tcp
  server http_worker1 <KUBENODE3_IP>:32513
  server http_worker2 <KUBENODE4_IP>:32513

backend https_back
  balance roundrobin
  mode tcp
  server https_worker1 <KUBENODE3_IP>:30086
  server https_worker2 <KUBENODE4_IP>:30086

After you save this file, reload the haproxy configuration: systemctl reload haproxy
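
You can also validate the file before reloading; haproxy checks a config without applying it:

# haproxy -c -f /etc/haproxy/haproxy.cfg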

Install cert manager

To obtain certificates from different sources, K8S has the excellent cert-manager, which works like a charm and lets us use the Let's Encrypt CA without any problems. Let's set it up through Helm. It should be installed in a separate namespace:

$ kubectl create namespace cert-manager
$ helm repo add jetstack https://charts.jetstack.io
$ helm repo update
$ helm --namespace cert-manager install cert-manager jetstack/cert-manager --version v1.0.4 --set installCRDs=true

Verify installation: kubectl get pods --namespace cert-manager

Now add two Let's Encrypt Issuers, one for staging (to test) and one for prod. First the staging one; don't forget to replace <YOUR_EMAIL>:

$ cat << EOF >> issuer-acme-staging.yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: <YOUR_EMAIL>
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class:  nginx
EOF
$ kubectl apply -f issuer-acme-staging.yaml

And the prod one. Don't forget to replace <YOUR_EMAIL>:

$ cat << EOF >> issuer-acme-prod.yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: <YOUR_EMAIL>
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx
EOF
$ kubectl apply -f issuer-acme-prod.yaml
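
Both Issuers should report Ready shortly after being applied; you can verify them with:

$ kubectl get issuers
$ kubectl describe issuer letsencrypt-staging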

Install Kubernetes Dashboard (OPTIONAL)

Kubernetes Dashboard will be the first application we deploy in our cluster; we will generate a certificate for it and publish it. This will be the first working example showing that we have a fully functional cluster.

Install it from Helm:

$ helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
$ helm repo update
$ helm install dashboard kubernetes-dashboard/kubernetes-dashboard

After that, you will see dashboard-kubernetes-dashboard in the services list: kubectl get svc

And the last step: add an ingress for it, which will proxy requests to this service and also obtain certificates. Replace the placeholders:

  • <ACME_ENV>: can be "letsencrypt-staging" (to play with the service) or "letsencrypt-prod" to obtain a valid certificate once everything is set up correctly.
  • <DOMAIN_FOR_DASHBOARD>: any public domain to bind to your k8s dashboard.
$ cat << EOF >> ingress-dashboard-resource.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    cert-manager.io/issuer: <ACME_ENV>
spec:
  tls:
  - hosts:
    - <DOMAIN_FOR_DASHBOARD>
    secretName: dashboard-k8s-tls

  rules:
  - host: <DOMAIN_FOR_DASHBOARD>
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: dashboard-kubernetes-dashboard
            port:
              number: 443
EOF
$ kubectl apply -f ingress-dashboard-resource.yaml

DONE! Now you can navigate in your browser to https://<DOMAIN_FOR_DASHBOARD> and get the k8s dashboard login window.
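
If the browser still shows a temporary or self-signed certificate, you can watch cert-manager handle the request; the Certificate created for the ingress is typically named after the secretName from the manifest above (dashboard-k8s-tls here):

$ kubectl get certificate dashboard-k8s-tls
$ kubectl describe certificate dashboard-k8s-tls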

Create login user

We create a service account and bind it to the cluster-admin ClusterRole. Create the service account:

$ cat << EOF >> service-account-dashboard.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
EOF
$ kubectl create -f service-account-dashboard.yaml

Bind it to cluster-admin:

$ cat << EOF >> cluster-role-binding-dashboard.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: default
EOF
$ kubectl create -f cluster-role-binding-dashboard.yaml

Now copy the token from the output of the command below and paste it into the login window:

$ kubectl describe secret $(kubectl get secret | grep admin-user | awk '{print $1}')

Congrats! You are in your K8S cluster dashboard.
