Kubernetes with KubeSpray on BareMetal Ubuntu Server 20.04 LTS

Provisioning a bare-metal Kubernetes cluster with Kubespray

Environment

2x Ubuntu Server 20.04 LTS

Minimum RAM

  • Master
    • Memory: 1500 MB
  • Node
    • Memory: 1024 MB

LAB Topology

NODE 1 = 192.168.17.136
NODE 2 = 192.168.17.131

Installation

  • Run all of the following steps on the first node only.

Add All Node IPs to the Hosts File

nano /etc/hosts
192.168.17.136 node1
192.168.17.131 node2

Generate an SSH Key and Copy It to Each Node

ssh-keygen
ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1
ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2

Install Dependencies (python3-pip)

apt install python3-pip

Clone Project & Prepare Dependencies

git clone https://github.com/kubernetes-sigs/kubespray
cd kubespray
git checkout master # or check out a release tag to pin a specific Kubespray version

sudo pip3 install -r requirements.txt
cp -rfp inventory/sample inventory/mycluster
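
If you prefer to keep the system Python clean, the requirements can also be installed inside a virtual environment (an optional variation; the venv path here is just an example):

# Optional: install Kubespray's Python requirements in a virtualenv
python3 -m venv ~/kubespray-venv
source ~/kubespray-venv/bin/activate
pip install -r requirements.txt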

Declare Node IPs

declare -a IPS=(192.168.17.136 192.168.17.131)
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
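
Before running the full playbook, it is worth checking that Ansible can actually reach every node over SSH (a quick sanity check, not part of the original steps):

ansible -i inventory/mycluster/hosts.yaml all -m ping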

Review the General Configuration

Change these values if needed:
cat inventory/mycluster/group_vars/all/all.yml
cat inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml

To specify the Kubernetes Version (OPTIONAL)

nano inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
kube_version: v1.21.9

To specify the Container Runtime (OPTIONAL)

nano inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
container_manager: docker

Running Behind NAT / Multiple Endpoint (OPTIONAL)

If the cluster sits behind NAT or is reached through extra addresses, you can declare those endpoints so they are added to the API server's TLS certificate:

nano inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml

supplementary_addresses_in_ssl_keys: ["<extra-ip-or-hostname>"]

Run Ansible to Install

Run this as root:
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml

Check

kubectl get nodes
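
On a healthy two-node cluster both nodes should report Ready, roughly like this (names, ages, and versions are illustrative):

NAME    STATUS   ROLES                  AGE   VERSION
node1   Ready    control-plane,master   10m   v1.21.9
node2   Ready    <none>                 9m    v1.21.9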

Add Node

Reference: https://cloudolife.com/2021/08/28/Kubernetes-K8S/Kubespray/Use-Kubespray-to-add-or-remove-control-plane-master-node-into-the-exist-kubernetes-K8S-cluster/
  • Example hosts.yaml for adding a new worker:
all:
  hosts:
    node1:
      ansible_host: 172.20.2.34
      ip: 172.20.2.34
      access_ip: 172.20.2.34
    node2:
      ansible_host: 172.20.2.35
      ip: 172.20.2.35
      access_ip: 172.20.2.35
  children:
    kube_control_plane:
      hosts:
        node1:
    kube_node:
      hosts:
        node1:
        node2:
    etcd:
      hosts:
        node1:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}
ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml --limit=node2
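
Kubespray also ships a dedicated scale.yml playbook for adding worker nodes, which avoids re-running the full cluster deployment (check the docs for your Kubespray version):

ansible-playbook -i inventory/mycluster/hosts.yaml scale.yml --become --become-user=root --limit=node2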

Delete Node

ansible-playbook -i inventory/mycluster/hosts.yaml remove-node.yml -e node=node3
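
If the node being removed is already offline, recent Kubespray versions accept extra variables to skip the node reset and allow ungraceful removal (variable names taken from the upstream docs; verify them for your version):

ansible-playbook -i inventory/mycluster/hosts.yaml remove-node.yml -e node=node3 -e reset_nodes=false -e allow_ungraceful_removal=true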

Kubernetes Upgrade

  • Kubespray upgrades must step through one minor version at a time; jumping several minor versions in a single run is not supported.
  • Available Kubernetes versions are listed at https://kubernetes.io/releases/
Example: upgrade from v1.24.7 to v1.26.3, stepping through v1.25.5:
cd kubespray
nano inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
kube_version: v1.25.5
ansible-playbook cluster.yml -i inventory/mycluster/hosts.yaml -e kube_version=v1.25.5 -e upgrade_cluster_setup=true

cd kubespray
nano inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
kube_version: v1.26.3
ansible-playbook cluster.yml -i inventory/mycluster/hosts.yaml -e kube_version=v1.26.3 -e upgrade_cluster_setup=true
kubectl get nodes -o wide
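
Kubespray also provides an upgrade-cluster.yml playbook that cordons, drains, and upgrades nodes one at a time, which is generally gentler on running workloads than re-running cluster.yml:

ansible-playbook upgrade-cluster.yml -i inventory/mycluster/hosts.yaml --become --become-user=root -e kube_version=v1.25.5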

Source

https://github.com/kubernetes-sigs/kubespray

Error

  • Error after restarting a node:
Get runtime version failed" err="get remote runtime typed version failed: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/cri-dockerd.sock: connect: no such file or directory\"
sudo systemctl enable cri-dockerd.service
sudo systemctl restart cri-dockerd.service
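
Afterwards, confirm the service is up and the socket exists again:

systemctl status cri-dockerd.service
ls -l /var/run/cri-dockerd.sock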
  • Node not found
Jul 06 17:27:49 k8s3-master kubelet[80133]: E0706 17:27:49.660120   80133 kubelet.go:2466] "Error getting node" err="node \"k8s3-master\" not found"
cd kubespray
declare -a IPS=(192.168.17.136 192.168.17.131)
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
ansible-playbook -i inventory/mycluster/hosts.yaml  --become --become-user=root cluster.yml

Downgrade Cluster

  • Downgrade from v1.26.3 to v1.25.5
  • Kubespray does not guarantee that downgrades will succeed (the play may fail).
nano inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
kube_version: v1.25.5
ansible-playbook cluster.yml -i inventory/mycluster/hosts.yaml -e kube_version=v1.25.5 -e upgrade_cluster_setup=true

Reset cluster

ansible-playbook -i inventory/mycluster/hosts.yaml reset.yml --become --become-user=root
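
reset.yml asks for interactive confirmation before wiping the nodes; for unattended runs the prompt can be pre-answered (variable name as used by recent Kubespray versions):

ansible-playbook -i inventory/mycluster/hosts.yaml reset.yml --become --become-user=root -e reset_confirmation=yes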

Automatically free up storage by pruning dangling Docker images

When Docker is the container manager for Kubernetes, dangling images accumulate and eat storage over time; they can be deleted periodically with docker image prune, for example from a cron job as sketched below.
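
A minimal sketch, assuming a weekly schedule and a log path chosen purely for illustration:

# /etc/cron.d/docker-prune -- prune dangling Docker images weekly (example schedule)
0 3 * * 0 root /usr/bin/docker image prune -f >> /var/log/docker-prune.log 2>&1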

Migrate Container Runtime

Alternative Deployer
