Multi-node Kubernetes cluster

References:

Setup Cluster

Configure Container runtimes

Supported container runtimes:

  • containerd
  • CRI-O
  • Docker Engine
  • Mirantis Container Runtime

In this tutorial, the runtime used is containerd.

  • Configure kernel modules and sysctl for containerd
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# Optional: IPVS modules, only needed if kube-proxy will run in IPVS mode
# (nf_conntrack replaced nf_conntrack_ipv4 on kernels >= 4.19)
sudo modprobe ip_vs ip_vs_rr ip_vs_sh ip_vs_wrr nf_conntrack

Set the sysctl params required by Kubernetes; these persist across reboots:

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system
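Optionally, verify that the modules are loaded and the sysctl values took effect before moving on:

lsmod | grep -E 'overlay|br_netfilter'
# All three values below should report 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward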
  • Install containerd
# apt-key is deprecated on recent Ubuntu releases; the signed-by keyring
# approach below is the method Docker currently documents
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt update

sudo apt install -y containerd.io
  • Create the containerd configuration directory
sudo mkdir -p /etc/containerd

  • Generate the default containerd configuration
containerd config default | sudo tee /etc/containerd/config.toml
  • Turn on SystemdCgroup in containerd
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
  • Enable and restart the containerd service
sudo systemctl enable containerd
sudo systemctl restart containerd
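An optional check that containerd is running and picked up the new configuration:

sudo systemctl status containerd --no-pager    # should be active (running)
grep SystemdCgroup /etc/containerd/config.toml # should now print "SystemdCgroup = true"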

Install kubectl + kubeadm + kubelet

https://gist.github.com/guilhermelinhares/c06853c0565c1b02f4c98b1c209e13a4

  • Update the apt package index and install packages needed to use the Kubernetes apt repository:
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
  • Add the Kubernetes apt repository. Note: the legacy apt.kubernetes.io / packages.cloud.google.com repository has been shut down; the community-owned pkgs.k8s.io repository replaces it (v1.26 below matches the cluster version used later in this tutorial):
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.26/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.26/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
  • Update apt package index, install kubelet, kubeadm and kubectl, and pin their version:
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
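A quick version check confirms all three tools installed and were pinned (optional):

kubeadm version -o short
kubectl version --client
kubelet --version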

Cluster Init

Disable swap on all nodes

sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
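kubeadm's preflight checks fail while swap is active, so it is worth verifying before continuing:

swapon --show   # no output means no active swap devices
free -h         # the Swap line should read 0B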

  • Download cluster images (control-plane)
kubeadm config images pull --cri-socket unix:///var/run/containerd/containerd.sock
  • Add KUBELET_EXTRA_ARGS (the jq filter assumes the primary interface is eth0; see the check after this block)
sudo apt install -yq jq
IPADDR="$(ip --json a s | jq -r '.[] | if .ifname == "eth0" then .addr_info[] | if .family == "inet" then .local else empty end else empty end')"
cat <<EOF | sudo tee /etc/default/kubelet
KUBELET_EXTRA_ARGS=--node-ip=$IPADDR
EOF
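If the node's primary interface is named differently (e.g. ens3 or enp0s3), adjust the jq filter above. To check the interface names and the resulting kubelet file:

ip -br -4 addr show        # list interfaces and their IPv4 addresses
cat /etc/default/kubelet   # should contain KUBELET_EXTRA_ARGS=--node-ip=<node IP>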
  • Initialize cluster (control-plane)
# IPADDR was set above
NODENAME=$(hostname -s)
POD_CIDR="192.168.0.0/16"
sudo kubeadm init --apiserver-advertise-address=$IPADDR --apiserver-cert-extra-sans=$IPADDR --pod-network-cidr=$POD_CIDR --node-name $NODENAME
W0228 15:21:30.614424  110094 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[init] Using Kubernetes version: v1.26.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local hostname] and IPs [10.96.0.1 10.1.3.21]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost hostname] and IPs [ip 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost hostname] and IPs [ip 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 8.501311 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
8c8ca6cb1ef3b5
[mark-control-plane] Marking the node hostname as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node hostname as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: gi5tqk.cmht
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join hostname:6443 --token gi5tqk.cmht \
	--discovery-token-ca-cert-hash sha256:717ca771d5659734a274b4c25a7cde7d31df4fd846 \
	--control-plane --certificate-key 8c8ca6cab0ad5a77c2bd3758312d2

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join hostname:6443 --token gi5tqk.cmht \
	--discovery-token-ca-cert-hash sha256:717ca771d5659734a274b4c25a7cde7d31df4
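If the bootstrap token above expires (tokens are valid for 24 hours by default), a fresh worker join command can be printed on the control plane at any time:

kubeadm token create --print-join-command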
  • Set up the cluster config for a regular user
 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config
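With the kubeconfig in place, kubectl should now reach the API server; the control-plane node will report NotReady until a pod network is installed:

kubectl cluster-info
kubectl get nodes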

  • Set up a network policy provider

https://kubernetes.io/docs/tasks/administer-cluster/network-policy-provider/
https://www.weave.works/docs/net/latest/kubernetes/kube-addon/

# Install Weave Net
kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml

# Check pod network
kubectl get pods -n kube-system
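Weave Net runs as a DaemonSet named weave-net in kube-system; one way to confirm the pod network is fully up is to wait on its rollout (an optional check, assuming the manifest above was applied unchanged):

kubectl -n kube-system rollout status daemonset/weave-net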


  • Join worker nodes to the control plane
Run the join command printed by kubeadm init on each worker node as root:

kubeadm join hostname:6443 --token gi5tqk.cmht \
	--discovery-token-ca-cert-hash sha256:717ca771d5659734a274b4c25a7cde7d31df4

  • Check nodes from the control plane. After all nodes have joined, check that each one is Ready:
kubectl get node

# Check which node each pod is running on
kubectl get pods --all-namespaces -o wide
  • Assign a worker label to each node
kubectl label node NODE_NAME node-role.kubernetes.io/worker=worker
node/NODE_NAME labeled
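The label then shows up in the ROLES column (an optional check; NODE_NAME above is a placeholder for the worker's hostname):

kubectl get nodes   # ROLES should now show "worker" for the labeled node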

Install Nginx Ingress Controller

https://kubernetes.github.io/ingress-nginx/deploy/

  • Install the bare-metal NGINX Ingress manifest
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/baremetal/deploy.yaml
  • Check the pods in the ingress-nginx namespace:
kubectl get pods --namespace=ingress-nginx
  • After a while, they should all be running. The following command will wait for the ingress controller pod to be up, running, and ready:
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s
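On bare metal the controller is exposed through a NodePort service; the assigned HTTP/HTTPS ports can be read from the service (an optional check; ingress-nginx-controller is the service created by the manifest above):

kubectl get svc -n ingress-nginx ingress-nginx-controller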

Install a bare-metal load balancer (MetalLB)

https://metallb.org/installation/

  • Install the MetalLB load balancer
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.9/config/manifests/metallb-native.yaml
  • Wait until the MetalLB pods (controller and speakers) are ready:
kubectl wait --namespace metallb-system \
                --for=condition=ready pod \
                --selector=app=metallb \
                --timeout=90s
  • Get the node IPs
kubectl get nodes -o wide


  • Configure the load balancer address range
cat << EOF | kubectl create -f - 
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: config
  namespace: metallb-system
spec:
  addresses:
  - 10.1.5.10-10.1.5.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: empty
  namespace: metallb-system
EOF
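To verify that MetalLB hands out addresses from the pool, a throwaway LoadBalancer service can be created (nginx-test is an arbitrary name used only for this check):

kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --port=80 --type=LoadBalancer
kubectl get svc nginx-test    # EXTERNAL-IP should come from 10.1.5.10-10.1.5.250
# Clean up afterwards
kubectl delete svc,deployment nginx-test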