In today's session, we are going to deploy Kubernetes (1 master, 1 worker) with libvirt.
- Hypervisor CPU: 8 CPUs or more (master1: 4, worker1: 4)
- Hypervisor memory: 16G
- An SSH key for the user (run `ssh-keygen` if you don't have one)
- Ansible (run `pip install ansible --user` to install it)
- kubectl on the hypervisor:
  curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
  chmod +x kubectl
  sudo mv kubectl /usr/local/bin
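The prerequisites above can be sanity-checked with a few commands (a sketch; the 8-CPU / 16G figures come from the list above):

```shell
# Quick sanity check of the hypervisor prerequisites (illustrative).
nproc                               # want 8 or more CPUs
free -g | awk '/^Mem:/ {print $2}'  # want roughly 16G or more
ls ~/.ssh/id_*.pub 2>/dev/null || echo "no SSH key found - run ssh-keygen"
command -v ansible >/dev/null || echo "ansible not found - pip install ansible --user"
command -v kubectl >/dev/null || echo "kubectl not found - see the install steps above"
```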
Note: we can of course run all of this from another machine (over SSH), but in today's session we run kube2 directly on the hypervisor.
$ git clone https://github.com/s1061123/kube2.git
$ cd kube2
$ ansible-galaxy install -r requirements.yml
Starting galaxy role install process
- extracting ansible-role-libvirt-host to /home/tohayash/work/kube2/roles/ansible-role-libvirt-host
- ansible-role-libvirt-host (v1.8.0) was installed successfully
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Skipping 'community.general' as it is already installed
$ vi inventory/virthost.inventory
(remove the '#' from the first line)
$ cat inventory/virthost.inventory
virt_host ansible_connection=local
[virthost]
virt_host
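If you prefer a non-interactive edit, the same change (dropping the leading '#') can be done with sed; shown here on sample text, and on the real file it would be `sed -i '1s/^#//' inventory/virthost.inventory`:

```shell
# Strip a leading '#' from the first line only - the same effect as the
# vi edit above, demonstrated on a commented-out sample line.
printf '#virt_host ansible_connection=local\n[virthost]\nvirt_host\n' | sed '1s/^#//'
```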
$ vi group_vars/virthost
(change any config you want for the VM setup, such as VM RAM and CPU)
$ vi group_vars/all
(change any config you want for the Kubernetes install, such as container_runtime)
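The exact variable names depend on the kube2 version, so one way to find the knobs is to grep the group_vars files (the patterns below are guesses, not the actual variable names):

```shell
# List candidate tunables in the group_vars files. The patterns are
# illustrative; check the files themselves for the real variable names.
# '|| true' keeps the command succeeding even when nothing matches.
grep -nEi 'cpu|memory|runtime' group_vars/virthost group_vars/all || true
```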
$ ansible-playbook -i inventory/virthost.inventory 01_setup_env.yml
PLAY [virthost] ********************************************************************************************************
(snip)
$ ansible-playbook -i inventory/virthost.inventory 02_setup_vm.yml
PLAY [virthost] ********************************************************************************************************
(snip)
$ virsh list
Id Name State
------------------------------
1 kube-node-1 running
2 kube-master1 running
$ ssh kube-master1
Warning: Permanently added '192.168.122.98' (ED25519) to the list of known hosts.
Last login: Mon Apr 18 23:19:50 2022 from 192.168.122.1
[fedora@kube-master1 ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:82:ef:b8 brd ff:ff:ff:ff:ff:ff
altname enp0s3
altname ens3
inet 192.168.122.238/24 brd 192.168.122.255 scope global dynamic noprefixroute eth0
valid_lft 3209sec preferred_lft 3209sec
inet6 fe80::5cf5:802d:b342:9e51/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:12:52:4e brd ff:ff:ff:ff:ff:ff
altname enp0s4
altname ens4
inet 10.1.1.204/24 brd 10.1.1.255 scope global dynamic noprefixroute eth1
valid_lft 3209sec preferred_lft 3209sec
inet6 fe80::516f:1184:b105:63c0/64 scope link noprefixroute
valid_lft forever preferred_lft forever
[fedora@kube-master1 ~]$ exit
logout
Connection to 192.168.122.98 closed.
[tohayash@tohayash-lab kube2]$ ssh kube-node-1
Warning: Permanently added '192.168.122.7' (ED25519) to the list of known hosts.
[fedora@kube-node-1 ~]$ exit
logout
Connection to 192.168.122.7 closed.
If you cannot log in to the machines, check the SSH config and the Terraform state.
# check ssh config (generated by kube2)
$ cat ~/.ssh/conf.d/kube_hosts
Host kube-master1
Hostname 192.168.122.98
User fedora
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
Host kube-node-1
Hostname 192.168.122.7
User fedora
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
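One thing to note: OpenSSH only reads ~/.ssh/conf.d if the main config pulls it in with an Include directive, so if the hosts above are unreachable by name, it is worth checking for that line (whether kube2 adds the Include for you is an assumption to verify):

```shell
# ~/.ssh/conf.d/kube_hosts is only honored if ~/.ssh/config includes
# the directory; warn if no Include line is present.
grep -i 'include' ~/.ssh/config 2>/dev/null || echo 'no Include line - add "Include conf.d/*" to ~/.ssh/config'
```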
Under the tf directory, we have everything needed to launch the VMs (the Fedora cloud image and the Terraform files, including the terraform binary).
$ cd tf
$ ./terraform show
(snip)
Outputs:
kube-master1 = [
"192.168.122.98",
]
kube-node-1 = [
"192.168.122.7",
]
Note: the inventory file for ansible-playbook here is different: inventory/vms.local.generated!
$ ansible-playbook -i inventory/vms.local.generated 03_kube_install.yml
(snip)
TASK [show message] ****************************************************************************************************
ok: [kube-master1] => {
"msg": "export KUBECONFIG=/home/tohayash/work/kube2/kubeconfig loads cluster config!"
}
PLAY RECAP *************************************************************************************************************
kube-master1 : ok=51 changed=37 unreachable=0 failed=0 skipped=43 rescued=0 ignored=0
kube-node-1 : ok=30 changed=24 unreachable=0 failed=0 skipped=41 rescued=0 ignored=0
The bottleneck in the script is dnf update, so updating the base image to the latest Fedora cloud image from koji may make the run faster.
$ export KUBECONFIG=/home/tohayash/work/kube2/kubeconfig
$ kubectl get node
NAME STATUS ROLES AGE VERSION
kube-master1 NotReady control-plane,master 3m5s v1.23.5
kube-node-1 NotReady <none> 59s v1.23.5
The nodes show NotReady because no CNI plugin is installed yet; installing one is the next step. Useful paths on the VMs:
- CNI config path: /etc/cni/net.d
- CNI bin path: /opt/cni/bin
  (if you use container_runtime: crio with a release build, the CNI bin path might be /usr/libexec/cni)
- Kubeadm config: /root/kubeadm.cfg
- Kubeadm log: /var/log/kubeadm.init.log
To tear down the VMs:
$ ansible-playbook -i inventory/virthost.inventory 99_teardown_vms.yml
If you want to deploy again (with the same Fedora image):
$ ansible-playbook -i inventory/virthost.inventory 02_setup_vm.yml
$ ansible-playbook -i inventory/vms.local.generated 03_kube_install.yml
If you want to deploy again (with a different VM image):
$ ansible-playbook -i inventory/virthost.inventory 01_setup_env.yml
$ ansible-playbook -i inventory/virthost.inventory 02_setup_vm.yml
$ ansible-playbook -i inventory/vms.local.generated 03_kube_install.yml
In addition, I measured the time of a kube2 deployment, from getting the server from Equinix Metal to finishing the Kubernetes deployment.
Environment
Summary
Logs