This example uses 192.168.0.210 as the IP address of the control-plane (admin) node, and 192.168.0.211 and 192.168.0.212 as the IP addresses of the worker nodes.
Install Ubuntu Server 22.04 LTS on all three servers and configure each with a static IP address.
This document describes the procedure for the following versions:
- Kubernetes 1.26
- containerd 1.6.16
- Ubuntu 22.04
sudo -s
printf "\n192.168.0.210 kadmin\n192.168.0.211 kworker1\n192.168.0.212 kworker2\n\n" >> /etc/hosts
printf "overlay\nbr_netfilter\n" >> /etc/modules-load.d/containerd.conf
modprobe overlay
modprobe br_netfilter
printf "net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\nnet.bridge.bridge-nf-call-ip6tables = 1\n" >> /etc/sysctl.d/99-kubernetes-cri.conf
sysctl --system
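To confirm that the modules loaded and the sysctl settings took effect before moving on, a quick check like the following can be used (output will vary by system):

```shell
# Confirm the kernel modules required for Kubernetes networking are loaded
lsmod | grep -E 'overlay|br_netfilter' || echo "modules not loaded"
# Confirm IPv4 forwarding is enabled (1 means enabled)
cat /proc/sys/net/ipv4/ip_forward
```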
wget https://github.com/containerd/containerd/releases/download/v1.6.16/containerd-1.6.16-linux-amd64.tar.gz -P /tmp/
tar Cxzvf /usr/local /tmp/containerd-1.6.16-linux-amd64.tar.gz
wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service -P /etc/systemd/system/
systemctl daemon-reload
systemctl enable --now containerd
wget https://github.com/opencontainers/runc/releases/download/v1.1.4/runc.amd64 -P /tmp/
install -m 755 /tmp/runc.amd64 /usr/local/sbin/runc
wget https://github.com/containernetworking/plugins/releases/download/v1.2.0/cni-plugins-linux-amd64-v1.2.0.tgz -P /tmp/
mkdir -p /opt/cni/bin
tar Cxzvf /opt/cni/bin /tmp/cni-plugins-linux-amd64-v1.2.0.tgz
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml
# Manually edit /etc/containerd/config.toml and change SystemdCgroup to true
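The manual edit can also be done non-interactively; a sketch, assuming the generated config contains the default `SystemdCgroup = false` line:

```shell
CONFIG=/etc/containerd/config.toml
if [ -f "$CONFIG" ]; then
  # Flip SystemdCgroup from false to true in the generated config
  sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$CONFIG"
  # Verify the change took effect
  grep 'SystemdCgroup' "$CONFIG"
fi
```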
systemctl restart containerd
swapoff -a
# Comment out the swap entry in /etc/fstab so swap stays disabled after reboot
nano /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/ubuntu-vg/ubuntu-lv during curtin installation
/dev/disk/by-id/dm-uuid-LVM-dGjacd5BxgkHcUejwpFu1FWksW96OsTMcT8x33MYTmOJe9UMahHiyBI0iOgUuQ03 / ext4 defaults 0 1
# /boot was on /dev/sda2 during curtin installation
/dev/disk/by-uuid/92b65b7c-f91f-4b8d-9d60-aa7d155ee450 /boot ext4 defaults 0 1
#/swap.img none swap sw 0 0
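Instead of editing the file by hand, the swap entry can be commented out with sed; a sketch, assuming the stock Ubuntu `/swap.img` line shown above:

```shell
# Comment out any active swap entry so swap stays disabled after reboot
sed -i '/\sswap\s/ s/^[^#]/#&/' /etc/fstab
```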
apt-get update
apt-get install -y apt-transport-https ca-certificates curl
curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | tee /etc/apt/sources.list.d/kubernetes.list
apt-get update
reboot
sudo -s
apt-get install -y kubelet=1.26.1-00 kubeadm=1.26.1-00 kubectl=1.26.1-00
apt-mark hold kubelet kubeadm kubectl
Check the swap configuration and make sure swap shows 0:
free -m
               total        used        free      shared  buff/cache   available
Mem:            1975         938          78           2         959         873
Swap:              0           0           0
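The check can also be scripted so that provisioning fails fast if swap is still on; a sketch parsing the `free -m` output:

```shell
# Abort if swap is still enabled (total swap must be 0)
swap_total=$(free -m | awk '/^Swap:/ {print $2}')
if [ "$swap_total" -ne 0 ]; then
  echo "ERROR: swap is still enabled (${swap_total} MiB)" >&2
  exit 1
fi
echo "swap is disabled"
```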
kubeadm init --pod-network-cidr 10.10.0.0/16 --kubernetes-version 1.26.1 --node-name kadmin
# If kubeadm reports that the container runtime is not running on a node,
# resetting containerd to its default configuration is a common workaround:
sudo rm /etc/containerd/config.toml
sudo systemctl restart containerd
# Run the join command printed by kubeadm init on each worker node
kubeadm join 192.168.0.210:6443 --token zkb8bp.h7ra1j5kcs02hqex \
--discovery-token-ca-cert-hash sha256:022e4ab90d673db3ed2514628333d5704e9a925f9e1b8bb69cad278b5c1d67ff
Reference: https://docs.tigera.io/calico/3.25/getting-started/kubernetes/quickstart
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/tigera-operator.yaml
wget https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/custom-resources.yaml
Edit custom-resources.yaml and change the cidr to the pod network CIDR specified when running kubeadm init (10.10.0.0/16 in this example).
# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 10.10.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
---
# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
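The CIDR edit can also be scripted; a sketch, assuming the downloaded manifest still contains Calico's default pool of 192.168.0.0/16:

```shell
# Replace Calico's default pod CIDR with the one passed to kubeadm init
if [ -f custom-resources.yaml ]; then
  sed -i 's#cidr: 192.168.0.0/16#cidr: 10.10.0.0/16#' custom-resources.yaml
  grep 'cidr:' custom-resources.yaml
fi
```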
kubectl apply -f custom-resources.yaml
To print the command a worker node should use to join the cluster, run the following on the control-plane node:
kubeadm token create --print-join-command
kubeadm join 192.168.0.210:6443 --token v6e5r3.agpeuzskbo9wle68 --discovery-token-ca-cert-hash sha256:022e4ab90d673db3ed2514628333d5704e9a925f9e1b8bb69cad278b5c1d67ff
Run the following command on the control-plane node:
# kubectl get nodes
If you see output like the following, the cluster is working correctly:
NAME       STATUS   ROLES           AGE   VERSION
kadmin     Ready    control-plane   93m   v1.26.1
kworker1   Ready    <none>          64m   v1.26.1
kworker2   Ready    <none>          49m   v1.26.1
If you get an error, try the following:
- Export the KUBECONFIG environment variable:
export KUBECONFIG=/etc/kubernetes/admin.conf
or
export KUBECONFIG=$HOME/.kube/config
- Check whether a config file exists in your home directory; if not, copy one there and take ownership of it:
cp /etc/kubernetes/admin.conf $HOME/
chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
After confirming that everything works, configure the variable to be exported automatically when you log in to the control-plane node:
echo 'export KUBECONFIG=$HOME/admin.conf' >> $HOME/.bashrc
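For a non-root user, the setup printed in the kubeadm init output is the usual alternative to exporting KUBECONFIG by hand:

```shell
# Standard kubeconfig setup recommended by kubeadm init for regular users
mkdir -p "$HOME/.kube"
if [ -f /etc/kubernetes/admin.conf ]; then
  sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
  sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
fi
```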