@tormath1
Last active April 28, 2024 11:30
Cluster API OpenStack using Flatcar

This walkthrough is done on a devstack environment.

Resources

From an existing Kubernetes cluster deployed with Kind:

$ kind create cluster
$ cat > ~/.cluster-api/clusterctl.yaml <<EOF
providers:
  - name: openstack
    url: https://github.com/kubernetes-sigs/cluster-api-provider-openstack/releases/v0.8.0-alpha.0/infrastructure-components.yaml
    type: InfrastructureProvider
EOF
$ curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.3.2/clusterctl-linux-amd64 -o clusterctl
$ sudo install -o root -g root -m 0755 clusterctl /opt/bin/clusterctl
$ wget https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64
$ sudo install -o root -g root -m 0755 yq_linux_amd64 /opt/bin/yq
$ export EXP_KUBEADM_BOOTSTRAP_FORMAT_IGNITION=true
$ clusterctl init --infrastructure openstack
$ wget https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-openstack/master/templates/env.rc -O /tmp/env.rc
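Before going further, it is worth confirming that the provider controllers came up after `clusterctl init`. A quick check, assuming the default namespaces used by Cluster API (`capi-system`) and the OpenStack provider (`capo-system`):

```shell
# Verify that the core Cluster API and OpenStack provider controllers are running.
kubectl get pods -n capi-system
kubectl get pods -n capo-system
# All pods should reach the Running state before generating a cluster.
```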

Download the clouds.yml from the Horizon dashboard (NOTE: you need to add the user's password to the auth section).

source /tmp/env.rc ./clouds.yml openstack
export OPENSTACK_DNS_NAMESERVERS=8.8.8.8
# FailureDomain is the failure domain the machine will be created in. (nova for devstack base setup)
export OPENSTACK_FAILURE_DOMAIN=nova
# The flavor for the control plane machines.
export OPENSTACK_CONTROL_PLANE_MACHINE_FLAVOR=m1.medium
# The flavor for the worker machines.
export OPENSTACK_NODE_MACHINE_FLAVOR=m1.small
# The name of the image to use for your server instance. If a RootVolume is specified, this is ignored and the rootVolume is used directly.
export OPENSTACK_FLATCAR_IMAGE_NAME=flatcar-stable-capi
# The SSH key pair name
export OPENSTACK_SSH_KEY_NAME=<insert-a-ssh-key-name>
# The external network
export OPENSTACK_EXTERNAL_NETWORK_ID=""
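If you don't know the external network ID, it can be looked up with the openstack CLI (a sketch, assuming `python-openstackclient` is installed and `clouds.yml` is in place; on a devstack base setup the external network is usually named `public`):

```shell
# List external networks, then export the ID of the devstack "public" network.
openstack --os-cloud openstack network list --external
export OPENSTACK_EXTERNAL_NETWORK_ID=$(openstack --os-cloud openstack network show public -f value -c id)
```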

Regarding flatcar-stable-capi image

It's built with the image-builder: run make OEM_ID=openstack build-qemu-flatcar. When importing the image into devstack, select "qcow2" as the image format.
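For reference, the import into Glance could look like the following (a sketch; the exact path of the qcow2 file depends on the image-builder output directory):

```shell
# Upload the image produced by image-builder into Glance under the name
# expected by OPENSTACK_FLATCAR_IMAGE_NAME above.
openstack image create \
  --disk-format qcow2 \
  --container-format bare \
  --file ./output/flatcar-stable-capi/flatcar-stable-capi.qcow2 \
  flatcar-stable-capi
```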

Deploying

Once the environment variables are filled:

$ clusterctl generate cluster capi-quickstart --flavor flatcar --kubernetes-version v1.27.2 --control-plane-machine-count=1 --worker-machine-count=3 > capi-quickstart.yaml
$ kubectl apply -f ./capi-quickstart.yaml
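Provisioning takes a few minutes; progress can be followed from the management cluster with standard Cluster API commands:

```shell
# Watch the cluster and machine objects converge.
kubectl get cluster,machines
# Show the full object tree with readiness conditions.
clusterctl describe cluster capi-quickstart
# The workload cluster's kubeconfig becomes available once the
# control plane is initialized.
```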

Now, you can deploy the external cloud provider using the template:

$ export CLUSTER_NAME=capi-quickstart
$ clusterctl get kubeconfig ${CLUSTER_NAME} --namespace default > ./${CLUSTER_NAME}.kubeconfig
# deploy CNI
$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig apply -f https://docs.projectcalico.org/archive/v3.23/manifests/calico.yaml
# get the helper to create the cloud secret
$ git clone --depth 1 https://github.com/kubernetes-sigs/cluster-api-provider-openstack
$ cluster-api-provider-openstack/templates/create_cloud_conf.sh ./clouds.yml openstack > /tmp/cloud.conf
$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig create secret -n kube-system generic cloud-config --from-file=/tmp/cloud.conf
$ rm /tmp/cloud.conf
$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/controller-manager/cloud-controller-manager-roles.yaml
$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/controller-manager/cloud-controller-manager-role-bindings.yaml
$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/controller-manager/openstack-cloud-controller-manager-ds.yaml
$ kubectl get nodes -A --kubeconfig=./${CLUSTER_NAME}.kubeconfig
NAME                                  STATUS   ROLES           AGE     VERSION
capi-quickstart-control-plane-vfrc2   Ready    control-plane   6m57s   v1.27.2
capi-quickstart-md-0-d4tpp            Ready    <none>          2m4s    v1.27.2
capi-quickstart-md-0-q4p9q            Ready    <none>          2m4s    v1.27.2
capi-quickstart-md-0-v67ks            Ready    <none>          2m6s    v1.27.2
$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig get nodes -o yaml | yq ".items[0].status.nodeInfo"
architecture: amd64
bootID: ea4b7d37-5c40-4a6c-bc75-0bb935153133
containerRuntimeVersion: containerd://1.6.21
kernelVersion: 5.15.117-flatcar
kubeProxyVersion: v1.27.2
kubeletVersion: v1.27.2
machineID: 9b49d00542524301ae2fc399a11ea75a
operatingSystem: linux
osImage: Flatcar Container Linux by Kinvolk 3510.2.4 (Oklo)
systemUUID: 9b49d005-4252-4301-ae2f-c399a11ea75a
$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig apply -f https://k8s.io/examples/application/deployment-update.yaml
$ kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig get pods -l app=nginx

Troubleshooting

  • If you want to inspect a node, you can reboot it from the console, press "e" at the GRUB menu, and append flatcar.autologin to the kernel command line.
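  • Alternatively, if the node is reachable over the network, you can SSH in with the key pair configured earlier (Flatcar's default user is core; the key name and node IP below are placeholders):

```shell
# Find the node addresses from the management cluster.
kubectl get openstackmachines -o wide
# Log in as the "core" user with the key pair set via OPENSTACK_SSH_KEY_NAME.
ssh -i ~/.ssh/<your-key> core@<node-ip>
# Once logged in, kubelet logs are available via journalctl.
journalctl -u kubelet
```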