Provisioning a Bare-Metal Kubernetes Cluster with Kubespray
2x Ubuntu Server 20.04 LTS
- Master
  - Memory: 1500 MB
- Node
  - Memory: 1024 MB
NODE 1 = 192.168.17.136
NODE 2 = 192.168.17.131
- Perform the following steps on the first node only.
nano /etc/hosts
192.168.17.136 node1
192.168.17.131 node2
ssh-keygen
ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1
ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2
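Before running Ansible, it is worth confirming passwordless SSH works (running hostname here is just a quick sanity check):
ssh root@node1 hostname
ssh root@node2 hostname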
apt install python3-pip
git clone https://github.com/kubernetes-sigs/kubespray
cd kubespray
git checkout master   # optional: check out a release branch or tag instead to pin a Kubespray version
sudo pip3 install -r requirements.txt
cp -rfp inventory/sample inventory/mycluster
declare -a IPS=(192.168.17.136 192.168.17.131)
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
Review the defaults and change them if needed:
cat inventory/mycluster/group_vars/all/all.yml
cat inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
nano inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
kube_version: v1.21.9
nano inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
container_manager: docker   # with docker, the kubelet talks to the CRI via cri-dockerd (see the restart troubleshooting note below)
You can declare additional endpoints (extra IPs or hostnames, e.g. a VIP) to be accepted by the API server certificate via supplementary_addresses_in_ssl_keys in
nano inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
---
supplementary_addresses_in_ssl_keys
---
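A minimal sketch, assuming a hypothetical VIP 192.168.17.140 and DNS name k8s-api.example.local (both placeholders; substitute your own):
---
supplementary_addresses_in_ssl_keys:
  - 192.168.17.140         # hypothetical load-balancer VIP
  - k8s-api.example.local  # hypothetical DNS name for the API server
---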
Run the deployment as root:
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
kubectl get nodes
Reference: https://cloudolife.com/2021/08/28/Kubernetes-K8S/Kubespray/Use-Kubespray-to-add-or-remove-control-plane-master-node-into-the-exist-kubernetes-K8S-cluster/
- Example hosts.yaml for adding a new worker node:
all:
  hosts:
    node1:
      ansible_host: 172.20.2.34
      ip: 172.20.2.34
      access_ip: 172.20.2.34
    node2:
      ansible_host: 172.20.2.35
      ip: 172.20.2.35
      access_ip: 172.20.2.35
  children:
    kube_control_plane:
      hosts:
        node1:
    kube_node:
      hosts:
        node1:
        node2:
    etcd:
      hosts:
        node1:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}
Deploy only the new worker node:
ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml --limit=node2
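Kubespray also ships a dedicated scale.yml playbook for adding worker nodes; a minimal sketch against the same inventory (node name assumed to match hosts.yaml):
ansible-playbook -i inventory/mycluster/hosts.yaml scale.yml --become --become-user=root --limit=node2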
To remove a node (node3 here as an example name):
ansible-playbook -i inventory/mycluster/hosts.yaml remove-node.yml -e node=node3
- Kubespray upgrades must step through minor versions one at a time; jumping across a minor version (e.g., v1.24 straight to v1.26) is not supported.
- Kubernetes release versions are listed at https://kubernetes.io/releases/
Upgrade from v1.24.7 to v1.26.3 (two hops: v1.24.7 -> v1.25.5 -> v1.26.3)
cd kubespray
nano inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
kube_version: v1.25.5
ansible-playbook cluster.yml -i inventory/mycluster/hosts.yaml -e kube_version=v1.25.5 -e upgrade_cluster_setup=true
cd kubespray
nano inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
kube_version: v1.26.3
ansible-playbook cluster.yml -i inventory/mycluster/hosts.yaml -e kube_version=v1.26.3 -e upgrade_cluster_setup=true
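Kubespray also provides an upgrade-cluster.yml playbook that upgrades nodes gracefully one by one; a sketch under the same inventory (flags mirror the cluster.yml invocation above):
ansible-playbook upgrade-cluster.yml -i inventory/mycluster/hosts.yaml --become --become-user=root -e kube_version=v1.26.3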
kubectl get nodes -o wide
https://github.com/kubernetes-sigs/kubespray
- Error after restarting a node:
Get runtime version failed" err="get remote runtime typed version failed: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/cri-dockerd.sock: connect: no such file or directory\"
sudo systemctl enable cri-dockerd.service
sudo systemctl restart cri-dockerd.service
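To verify the fix (standard systemd checks; the socket path is taken from the error message above):
sudo systemctl status cri-dockerd.service
ls -l /var/run/cri-dockerd.sock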
- Node not found
Jul 06 17:27:49 k8s3-master kubelet[80133]: E0706 17:27:49.660120 80133 kubelet.go:2466] "Error getting node" err="node \"k8s3-master\" not found"
cd kubespray
declare -a IPS=(192.168.17.136 192.168.17.131)
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
- Downgrade from v1.26.3 to v1.25.5
- Kubespray does not guarantee that downgrading a cluster will work; it may fail.
nano inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
kube_version: v1.25.5
ansible-playbook cluster.yml -i inventory/mycluster/hosts.yaml -e kube_version=v1.25.5 -e upgrade_cluster_setup=true
To tear down the cluster completely, run reset.yml:
ansible-playbook -i inventory/mycluster/hosts.yaml reset.yml --become --become-user=root
When using Docker as the container manager on Kubernetes, you can periodically delete dangling images with docker image prune to free up storage. This is a good practice to automate.
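A minimal sketch using a root cron entry (the weekly 03:00 Sunday schedule is an assumption; adjust as needed):
# crontab -e (as root) -- prune dangling Docker images weekly
0 3 * * 0 /usr/bin/docker image prune -f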
- kubeadm fails to initialize the first master even with firewalld off and ufw off, on Ubuntu 20.04.6 (after apt update -y && apt upgrade -y); root cause unknown:
FAILED - RETRYING: kubeadm | Initialize first master (3 retries left).
FAILED - RETRYING: kubeadm | Initialize first master (2 retries left).
FAILED - RETRYING: kubeadm | Initialize first master (1 retries left).
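When this retry loop appears, the real error is usually visible in the kubelet logs on the first master; standard systemd tooling narrows it down:
# on the failing first master
systemctl status kubelet
journalctl -u kubelet --no-pager | tail -n 50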