This walkthrough was done on a devstack environment. For more context, see:
- YouTube video: https://www.youtube.com/watch?v=H2Eqw33m2S0
- Slides: https://github.com/tormath1/tormath1.github.io/blob/main/kcd-munich-flatcar-capi.pdf
$ kind create cluster
$ curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.3.2/clusterctl-linux-amd64 -o clusterctl
$ sudo install -o root -g root -m 0755 clusterctl /opt/bin/clusterctl
$ wget https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64
$ sudo install -o root -g root -m 0755 yq_linux_amd64 /opt/bin/yq
# Flatcar is provisioned with Ignition (not cloud-init), so enable the experimental Ignition bootstrap format
$ export EXP_KUBEADM_BOOTSTRAP_FORMAT_IGNITION=true
$ clusterctl init --infrastructure openstack
$ wget https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-openstack/master/templates/env.rc -O /tmp/env.rc
Download the clouds.yml from the Horizon dashboard (NOTE: you need to add the user's password in the auth section).
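The clouds.yml downloaded from Horizon follows the standard OpenStack client configuration schema and looks roughly like the sketch below; values are placeholders for your deployment, and the password line is the one you have to add by hand:

```yaml
clouds:
  openstack:
    auth:
      auth_url: http://<devstack-instance-ip>/identity
      username: "demo"
      password: "<your-password>"   # add this line manually
      project_name: "demo"
      user_domain_name: "Default"
    region_name: "RegionOne"
    interface: "public"
    identity_api_version: 3
```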
$ source /tmp/env.rc ./clouds.yml openstack
export OPENSTACK_DNS_NAMESERVERS=8.8.8.8
# FailureDomain is the failure domain the machine will be created in. (nova for devstack base setup)
export OPENSTACK_FAILURE_DOMAIN=nova
# The flavor reference for the flavor for your server instance.
export OPENSTACK_CONTROL_PLANE_MACHINE_FLAVOR=m1.medium
# The flavor reference for the flavor for your server instance.
export OPENSTACK_NODE_MACHINE_FLAVOR=m1.medium
# The name of the image to use for your server instance. If RootVolume is specified, this is ignored and the rootVolume is used directly.
# If you use an image built with the image-builder (instead of the sysext approach):
# export OPENSTACK_FLATCAR_IMAGE_NAME=flatcar-stable-capi
export FLATCAR_IMAGE_NAME=flatcar-stable
# The SSH key pair name
export OPENSTACK_SSH_KEY_NAME=<insert-a-ssh-key-name>
# The external network (can be the ID of the public network by default)
export OPENSTACK_EXTERNAL_NETWORK_ID=""
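Before generating the manifest, it can help to fail early if one of the variables above is still empty. A small POSIX shell helper can check this (a sketch; `check_env` is a hypothetical helper, not part of clusterctl):

```shell
# check_env VAR...: print every variable from the list that is unset or
# empty, and return non-zero if any is missing (hypothetical helper).
check_env() {
  missing=0
  for v in "$@"; do
    eval "val=\${${v}:-}"
    if [ -z "${val}" ]; then
      echo "missing: ${v}" >&2
      missing=1
    fi
  done
  return "${missing}"
}

check_env OPENSTACK_FAILURE_DOMAIN OPENSTACK_CONTROL_PLANE_MACHINE_FLAVOR \
  OPENSTACK_NODE_MACHINE_FLAVOR FLATCAR_IMAGE_NAME OPENSTACK_SSH_KEY_NAME \
  OPENSTACK_EXTERNAL_NETWORK_ID \
  || echo "fill in the missing variables before running clusterctl generate" >&2
```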
Once the environment variables are filled, generate the cluster manifest:
$ clusterctl generate cluster capi-quickstart --flavor flatcar-sysext --kubernetes-version v1.30.3 --control-plane-machine-count=1 --worker-machine-count=3 > capi-quickstart.yaml
Optionally, you can enable SSH access to the nodes. The patch below references ${CLUSTER_NAME}, so export it first with the name passed to clusterctl generate (here, capi-quickstart):
$ export CLUSTER_NAME=capi-quickstart
$ cat > ssh.yaml <<EOF
---
# Allow the SSH access for demo purposes.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: OpenStackMachineTemplate
metadata:
  name: ${CLUSTER_NAME}-control-plane
spec:
  template:
    spec:
      securityGroups:
        - filter:
            name: ssh
EOF
$ cat > kustomization.yaml <<EOF
resources:
- capi-quickstart.yaml
patches:
- path: ssh.yaml
  target:
    kind: OpenStackMachineTemplate
EOF
$ kubectl kustomize ./ --output capi-quickstart.yaml
You should now be ready to deploy:
$ kubectl apply -f ./capi-quickstart.yaml
As this is a devstack instance (not a full OpenStack deployment), you might need sshuttle
so that your management cluster can reach the workload cluster (172.24.4.0/24 is devstack's
default floating IP range):
$ sshuttle -r root@<devstack-instance-ip> 172.24.4.0/24 -l 0.0.0.0
Now, fetch the workload cluster's kubeconfig, deploy a CNI, and install the external cloud provider (the OpenStack cloud controller manager):
$ export CLUSTER_NAME=capi-quickstart
$ clusterctl get kubeconfig ${CLUSTER_NAME} --namespace default > ./${CLUSTER_NAME}.kubeconfig
# deploy CNI
$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig apply -f https://docs.projectcalico.org/archive/v3.23/manifests/calico.yaml
# get the helper to create the cloud secret
$ git clone --depth 1 https://github.com/kubernetes-sigs/cluster-api-provider-openstack
$ cluster-api-provider-openstack/templates/create_cloud_conf.sh ./clouds.yml openstack > /tmp/cloud.conf
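The generated cloud.conf is a small INI file consumed by the OpenStack cloud controller manager. Derived from your clouds.yml, it looks roughly like this sketch (placeholder values; the exact keys depend on your auth setup):

```ini
[Global]
auth-url=http://<devstack-instance-ip>/identity
username=demo
password=<your-password>
tenant-name=demo
domain-name=Default
region=RegionOne
```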
$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig create secret -n kube-system generic cloud-config --from-file=/tmp/cloud.conf
$ rm /tmp/cloud.conf
$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/controller-manager/cloud-controller-manager-roles.yaml
$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/controller-manager/cloud-controller-manager-role-bindings.yaml
$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/controller-manager/openstack-cloud-controller-manager-ds.yaml
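The nodes only become Ready once the CNI and the cloud controller manager are running, which can take a few minutes. A small retry loop can wait for that (a sketch; `retry` is a hypothetical helper, not a kubectl feature):

```shell
# retry MAX CMD...: run CMD until it succeeds, at most MAX times,
# sleeping RETRY_DELAY seconds (default 5) between attempts.
retry() {
  max=$1; shift
  n=0
  until "$@"; do
    n=$((n + 1))
    if [ "${n}" -ge "${max}" ]; then
      return 1
    fi
    sleep "${RETRY_DELAY:-5}"
  done
}

# Example: wait until every node of the workload cluster reports Ready.
# retry 60 kubectl --kubeconfig="./${CLUSTER_NAME}.kubeconfig" wait node \
#   --all --for=condition=Ready --timeout=10s
```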
$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig get nodes
NAME                                  STATUS   ROLES           AGE     VERSION
capi-quickstart-control-plane-vfrc2   Ready    control-plane   6m57s   v1.27.2
capi-quickstart-md-0-d4tpp            Ready    <none>          2m4s    v1.27.2
capi-quickstart-md-0-q4p9q            Ready    <none>          2m4s    v1.27.2
capi-quickstart-md-0-v67ks            Ready    <none>          2m6s    v1.27.2
$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig get nodes -o yaml | yq ".items[0].status.nodeInfo"
architecture: amd64
bootID: ea4b7d37-5c40-4a6c-bc75-0bb935153133
containerRuntimeVersion: containerd://1.6.21
kernelVersion: 5.15.117-flatcar
kubeProxyVersion: v1.27.2
kubeletVersion: v1.27.2
machineID: 9b49d00542524301ae2fc399a11ea75a
operatingSystem: linux
osImage: Flatcar Container Linux by Kinvolk 3510.2.4 (Oklo)
systemUUID: 9b49d005-4252-4301-ae2f-c399a11ea75a
Finally, deploy a sample nginx workload to check that the cluster schedules pods:
$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig apply -f https://k8s.io/examples/application/deployment-update.yaml
$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig get pods -l app=nginx