@jei0486
Last active February 27, 2024 09:33
# Install k8s after installing LXD for Kubernetes training
---
## 1. Install the LXC package and initialize LXD on Ubuntu 20.04 / 22.04
```bash
$ sudo apt-get update && sudo apt-get install lxc -y
$ sudo systemctl status lxc
$ lxd init
# Accept the default for every prompt except the storage backend, which should be "dir":
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, dir, lvm, zfs, ceph) [default=zfs]: dir
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like the LXD server to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
```
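The same answers can also be supplied non-interactively by piping a preseed into `lxd init --preseed`. A minimal sketch matching the interactive answers above (the `dir` backend and the default `lxdbr0` bridge; adjust to taste):

```yaml
# Preseed sketch mirroring the interactive answers above (assumes dir backend).
config: {}
networks:
- name: lxdbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: auto
storage_pools:
- name: default
  driver: dir
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
```

Save as `preseed.yaml` and run `cat preseed.yaml | lxd init --preseed`.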
---
### 1.1 How to completely remove LXD-related resources
```bash
# Usage
lxc list
lxc delete <whatever came from list>
lxc image list
lxc image delete <whatever came from list>
lxc network list
lxc network delete <whatever came from list>
echo '{"config": {}}' | lxc profile edit default
lxc storage volume list default
lxc storage volume delete default <whatever came from list>
lxc storage delete default
```
If a storage pool or network cannot be deleted because a profile still references it, delete that profile too (and reset the default profile to an empty config, as above).
```bash
$ lxc network list
+---------+----------+---------+-------------+---------+
| NAME | TYPE | MANAGED | DESCRIPTION | USED BY |
+---------+----------+---------+-------------+---------+
| docker0 | bridge | NO | | 0 |
+---------+----------+---------+-------------+---------+
| enp2s0 | physical | NO | | 0 |
+---------+----------+---------+-------------+---------+
| lxcbr0 | bridge | NO | | 0 |
+---------+----------+---------+-------------+---------+
| lxdbr0 | bridge | YES | | 1 |
+---------+----------+---------+-------------+---------+
```
---
## 2. Clone the kubernetes playground repository, which provides helper scripts for configuring Kubernetes
```bash
git clone https://github.com/justmeandopensource/kubernetes.git
```
---
### For ubuntu 20.04
```bash
# check out the commit for ubuntu 20.04
git checkout b974cd05578bef31cea4286a5970af92ae5c75da
```
---
## 3. Create a profile for k8s cluster
If you plan to run several pods, increasing the memory limit to 8GB per node is recommended.
### ubuntu 20.04
**k8s-profile-config**
```yaml
config:
  limits.cpu: "2"
  limits.memory: 8GB
  limits.memory.swap: "false"
  linux.kernel_modules: ip_tables,ip6_tables,nf_nat,overlay,br_netfilter
  raw.lxc: "lxc.apparmor.profile=unconfined\nlxc.cap.drop= \nlxc.cgroup.devices.allow=a\nlxc.mount.auto=proc:rw sys:rw\nlxc.mount.entry = /dev/kmsg dev/kmsg none defaults,bind,create=file"
  security.privileged: "true"
  security.nesting: "true"
description: LXD profile for Kubernetes
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: k8s
used_by: []
```
---
### ubuntu 22.04
```yaml
config:
  limits.cpu: "2"
  limits.memory: 8GB
  limits.memory.swap: "false"
  linux.kernel_modules: ip_tables,ip6_tables,nf_nat,overlay,br_netfilter
  raw.lxc: "lxc.apparmor.profile=unconfined\nlxc.cap.drop= \nlxc.cgroup.devices.allow=a\nlxc.mount.auto=proc:rw sys:rw"
  security.privileged: "true"
  security.nesting: "true"
description: LXD profile for Kubernetes
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  kmsg:
    path: /dev/kmsg
    source: /dev/kmsg
    type: unix-char
  root:
    path: /
    pool: default
    type: disk
name: k8s
used_by: []
```
---
### Create the k8s profile
```bash
$ cd lxd-provisioning
$ lxc profile create k8s
$ cat k8s-profile-config | lxc profile edit k8s
$ lxc profile list
+---------+---------+
| NAME | USED BY |
+---------+---------+
| default | 0 |
+---------+---------+
| k8s | 0 |
+---------+---------+
```
---
## 4. Create nodes for k8s cluster
### ubuntu 20.04
```bash
$ lxc launch ubuntu:20.04 m1 --profile k8s
Creating m1
Starting m1
$ lxc launch ubuntu:20.04 w1 --profile k8s
Creating w1
Starting w1
$ lxc launch ubuntu:20.04 w2 --profile k8s
Creating w2
Starting w2
```
---
### ubuntu 22.04
```bash
$ lxc launch ubuntu:22.04 m1 --profile k8s
```
---
### Check the container list
```bash
$ lxc list
+------+---------+----------------------+-----------------------------------------------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+---------+----------------------+-----------------------------------------------+-----------+-----------+
| m1 | RUNNING | 10.172.14.166 (eth0) | fd42:c5d2:89b5:c806:216:3eff:fe4f:b5a6 (eth0) | CONTAINER | 0 |
+------+---------+----------------------+-----------------------------------------------+-----------+-----------+
| w1 | RUNNING | 10.172.14.108 (eth0) | fd42:c5d2:89b5:c806:216:3eff:fe7f:f81 (eth0) | CONTAINER | 0 |
+------+---------+----------------------+-----------------------------------------------+-----------+-----------+
| w2 | RUNNING | 10.172.14.144 (eth0) | fd42:c5d2:89b5:c806:216:3eff:feb0:46aa (eth0) | CONTAINER | 0 |
+------+---------+----------------------+-----------------------------------------------+-----------+-----------+
```
---
## 5. Run the bootstrap script on all nodes
### 5.1 Set up timezone and NTP client
```bash
# connect through terminal
lxc exec m1 bash
# reset the localtime symlink
sudo ln -sf /usr/share/zoneinfo/Asia/Seoul /etc/localtime
sudo apt-get update && sudo apt install ntp -y
sudo vi /etc/ntp.conf
# add the following line to ntp.conf:
server ntp-1.wrs.com iburst
sudo systemctl restart ntp
ntpq -p
```
---
### 5.2 Add and run the rootca.sh script on each node for HTTPS access on the WR network
**rootca.sh**
```bash
#!/bin/bash
# This script has been tested on Ubuntu 20.04
# For other versions of Ubuntu, you might need some tweaking
echo "[TASK 0] Install rootca for windriver network"
apt update > /dev/null 2>&1
apt install -y libnss3-tools ca-certificates > /dev/null 2>&1
mkdir -p /usr/local/share/ca-certificates/extra > /dev/null 2>&1
# copy (or wget -O) your rootCA.crt file into place:
cp (your rootCA.crt file) /usr/local/share/ca-certificates/extra/rootCA.crt > /dev/null 2>&1
update-ca-certificates > /dev/null 2>&1
reboot
```
---
```bash
$ cat rootca.sh | lxc exec m1 bash
[TASK 0] Install rootca for windriver network
$ lxc restart m1
$ cat rootca.sh | lxc exec w1 bash
[TASK 0] Install rootca for windriver network
$ lxc restart w1
$ cat rootca.sh | lxc exec w2 bash
[TASK 0] Install rootca for windriver network
$ lxc restart w2
```
---
### 5.3 Copy your ssh public key to each node (Ubuntu 22.04 only)
Append the contents of your `~/.ssh/id_rsa.pub` to `/root/.ssh/authorized_keys` on each node.
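One way to do this from the host is `lxc file push`. A minimal sketch (the node names `m1 w1 w2` are taken from step 4; it falls back to printing the commands when the `lxc` CLI is unavailable):

```bash
#!/bin/bash
# Push the local public key into each container's root authorized_keys.
# Falls back to printing the commands when the lxc CLI is unavailable.
RUN=""
command -v lxc >/dev/null 2>&1 || RUN=echo
for node in m1 w1 w2; do
  $RUN lxc exec "$node" -- mkdir -p /root/.ssh
  $RUN lxc file push "$HOME/.ssh/id_rsa.pub" "$node/root/.ssh/authorized_keys"
done
```

Note this overwrites any existing `authorized_keys` in the container; append via `lxc exec ... tee -a` if you need to preserve entries.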
---
### 5.4 Run the bootstrap script
```bash
$ cat bootstrap-kube.sh | lxc exec kmaster bash
[TASK 0] Install essential packages
[TASK 1] Install containerd runtime
[TASK 2] Add apt repo for kubernetes
[TASK 3] Install Kubernetes components (kubeadm, kubelet and kubectl)
[TASK 4] Enable ssh password authentication
[TASK 5] Set root password
[TASK 6] Install additional packages
[TASK 7] Pull required containers
[TASK 8] Initialize Kubernetes Cluster
[TASK 9] Copy kube admin config to root user .kube directory
[TASK 10] Deploy Flannel network
[TASK 11] Generate and save cluster join command to /joincluster.sh
$ cat bootstrap-kube.sh | lxc exec kworker1 bash
[TASK 0] Install essential packages
[TASK 1] Install containerd runtime
[TASK 2] Add apt repo for kubernetes
[TASK 3] Install Kubernetes components (kubeadm, kubelet and kubectl)
[TASK 4] Enable ssh password authentication
[TASK 5] Set root password
[TASK 6] Install additional packages
[TASK 7] Join node to Kubernetes Cluster
$ cat bootstrap-kube.sh | lxc exec kworker2 bash
[TASK 0] Install essential packages
[TASK 1] Install containerd runtime
[TASK 2] Add apt repo for kubernetes
[TASK 3] Install Kubernetes components (kubeadm, kubelet and kubectl)
[TASK 4] Enable ssh password authentication
[TASK 5] Set root password
[TASK 6] Install additional packages
[TASK 7] Join node to Kubernetes Cluster
$ lxc list
+----------+---------+------------------------+-----------------------------------------------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+----------+---------+------------------------+-----------------------------------------------+-----------+-----------+
| kmaster | RUNNING | 10.244.0.1 (cni0) | fd42:8f7f:8fc9:22c5:216:3eff:fe1f:6ab5 (eth0) | CONTAINER | 0 |
| | | 10.244.0.0 (flannel.1) | | | |
| | | 10.178.27.54 (eth0) | | | |
+----------+---------+------------------------+-----------------------------------------------+-----------+-----------+
| kworker1 | RUNNING | 10.244.1.1 (cni0) | fd42:8f7f:8fc9:22c5:216:3eff:fe24:1b8f (eth0) | CONTAINER | 0 |
| | | 10.244.1.0 (flannel.1) | | | |
| | | 10.178.27.109 (eth0) | | | |
+----------+---------+------------------------+-----------------------------------------------+-----------+-----------+
| kworker2 | RUNNING | 10.244.2.1 (cni0) | fd42:8f7f:8fc9:22c5:216:3eff:fe10:12a3 (eth0) | CONTAINER | 0 |
| | | 10.244.2.0 (flannel.1) | | | |
| | | 10.178.27.139 (eth0) | | | |
+----------+---------+------------------------+-----------------------------------------------+-----------+-----------+
```
---
### 5.5 Using the kubelx script instead of steps 4 and 5.4 (not recommended)
To apply the root CA when using the kubelx script, make the following changes:
```bash
lxd-provisioning$ git diff
diff --git a/lxd-provisioning/kubelx b/lxd-provisioning/kubelx
index 1fbb41e..6d1ac3d 100755
--- a/lxd-provisioning/kubelx
+++ b/lxd-provisioning/kubelx
@@ -18,6 +18,11 @@ kubeprovision()
     echo "==> Bringing up $node"
     lxc launch ubuntu:20.04 $node --profile k8s
     sleep 10
+    echo "==> update root ca $node"
+    cat rootca.sh | lxc exec $node bash
+    sleep 1
+    lxc restart $node
+    sleep 1
     echo "==> Running provisioner script"
     cat bootstrap-kube.sh | lxc exec $node bash
@@ -26,6 +31,7 @@ kubeprovision()
 kubedestroy() {
+  lxc profile delete k8s
   for node in $NODES
   do
     echo "==> Destroying $node..."
```
---
## 6. Verify
### 6.1 Exec into the kmaster node
`$ lxc exec kmaster bash`
---
### 6.2 Verifying the nodes
```bash
root@kmaster:~# kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kmaster Ready control-plane,master 16d v1.22.0 10.178.27.54 <none> Ubuntu 20.04.6 LTS 5.13.0-41-generic containerd://1.7.2
kworker1 Ready <none> 16d v1.22.0 10.178.27.109 <none> Ubuntu 20.04.6 LTS 5.13.0-41-generic containerd://1.7.2
kworker2 Ready <none> 16d v1.22.0 10.178.27.139 <none> Ubuntu 20.04.6 LTS 5.13.0-41-generic containerd://1.7.2
root@kmaster:~# cat /joincluster.sh
kubeadm join 10.178.27.54:6443 --token vwwvvi.4lzn38jjlqb1peda --discovery-token-ca-cert-hash sha256:9cc89f16286832a0865fe4c1e38a28c9fd0f406fec486836aede02f726160005 --ignore-preflight-errors=all
```
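The long `--discovery-token-ca-cert-hash` value in `/joincluster.sh` is just the SHA-256 of the cluster CA's public key, so it can be recomputed at any time. This is the standard kubeadm recipe; the default path argument here assumes the kubeadm control-plane layout:

```bash
#!/bin/bash
# Recompute kubeadm's --discovery-token-ca-cert-hash from a CA certificate.
CA=${1:-/etc/kubernetes/pki/ca.crt}
openssl x509 -pubkey -in "$CA" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | awk '{print "sha256:" $NF}'
```

Run it on kmaster and compare the output against the hash in `/joincluster.sh`.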
---
### 6.3 Verifying the cluster version
```bash
$ lxc exec kmaster bash
root@kmaster:~# kubectl cluster-info
Kubernetes control plane is running at https://10.178.27.54:6443
CoreDNS is running at https://10.178.27.54:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```
---
## 7. Troubleshooting
### 7.1 Check your firewall
Enable the required ports or disable the firewall:
`$ sudo ufw status`
`$ sudo ufw disable`
---
### 7.2 Swap memory
```bash
sudo swapoff -a && sudo sed -i '/swap/s/^/#/' /etc/fstab
```
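The `sed` expression comments out every fstab line that mentions swap. A safe way to see exactly what it does, against a throwaway copy rather than the real `/etc/fstab` (the sample entries are made up for illustration):

```bash
#!/bin/bash
# Dry-run the fstab edit from above against a temporary file.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
UUID=abcd-1234 / ext4 defaults 0 1
/swapfile none swap sw 0 0
EOF
sed -i '/swap/s/^/#/' "$fstab"   # same expression as above
cat "$fstab"                     # the swap line is now "#/swapfile ..."
rm -f "$fstab"
```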
---
### 7.3 Etc.
Check that the LXD storage driver is `dir`.
If it is not, repeat `lxd init` as described in section 1.
```bash
$ lxc storage list
+---------+--------+------------------------------------------------+-------------+---------+---------+
| NAME | DRIVER | SOURCE | DESCRIPTION | USED BY | STATE |
+---------+--------+------------------------------------------------+-------------+---------+---------+
| default | dir | /var/snap/lxd/common/lxd/storage-pools/default | | 5 | CREATED |
+---------+--------+------------------------------------------------+-------------+---------+---------+
```
---
Check if rootca is properly installed when using windriver network
```bash
root@kmaster:~# curl https://google.com
```
---
Check that cgroup v2 is enabled when using ubuntu 22.04 or higher:
```bash
$ stat -fc %T /sys/fs/cgroup/
cgroup2fs
```
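The same check can be wrapped as a small guard at the top of a provisioning script (a sketch; on 22.04 the kubelet expects the unified `cgroup2fs` hierarchy):

```bash
#!/bin/bash
# Report whether the host is running the unified cgroup v2 hierarchy.
fstype=$(stat -fc %T /sys/fs/cgroup/ 2>/dev/null)
if [ "$fstype" = "cgroup2fs" ]; then
  echo "cgroup v2 enabled"
else
  echo "warning: /sys/fs/cgroup is '$fstype', not cgroup2fs" >&2
fi
```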
---
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-control-plane-node
https://kubernetes.io/docs/setup/production-environment/container-runtimes/#cgroup-drivers
If you are using ubuntu 22.04 or higher and kube-proxy fails as shown below, adjust the `nf_conntrack_max` value on the host PC.
```bash
root@kmaster:~# kubectl logs kube-proxy-725n4 -n kube-system
I0219 09:50:28.922759 1 server_others.go:72] "Using iptables proxy"
I0219 09:50:28.927433 1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["10.127.18.11"]
I0219 09:50:28.929750 1 conntrack.go:118] "Set sysctl" entry="net/netfilter/nf_conntrack_max" value=262144
E0219 09:50:28.929765 1 server.go:556] "Error running ProxyServer" err="open /proc/sys/net/netfilter/nf_conntrack_max: permission denied"
E0219 09:50:28.929773 1 run.go:74] "command failed" err="open /proc/sys/net/netfilter/nf_conntrack_max: permission denied"
root@kmaster:~#
```
---
Adjust the `nf_conntrack_max` value on the host PC.
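One way to persist this on the host, hedged: the value 262144 mirrors what kube-proxy tried to set in the log above, and the `99-conntrack.conf` filename is an arbitrary choice.

```
# /etc/sysctl.d/99-conntrack.conf (hypothetical filename)
net.netfilter.nf_conntrack_max = 262144
```

Apply with `sudo sysctl --system`, or `sudo sysctl -w net.netfilter.nf_conntrack_max=262144` for the running kernel only.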