@squidpickles
Last active January 31, 2024 12:48

Multiplatform (amd64 and arm) Kubernetes cluster setup

The official guide for setting up Kubernetes using kubeadm works well for clusters of a single architecture. The main problem that crops up is that the kube-proxy image defaults to the architecture of the master node (where kubeadm was first run).

This causes issues when arm nodes join the cluster, as they will try to execute the amd64 version of kube-proxy, and will fail.

It turns out that the pod running kube-proxy is configured using a DaemonSet. With a small edit to the configuration, it's possible to create multiple DaemonSets—one for each architecture.

Steps

Follow the instructions at https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/ for setting up the master node. I've been using Weave Net as the network plugin; it seems to work well across architectures, and was easy to set up. Just be careful to pass an IPALLOC_RANGE to the Weave configuration that matches the --pod-network-cidr you gave to kubeadm init, if you used that option. Stop once you have the network plugin installed, before you add any nodes.

My workflow looks like:

sudo kubeadm init --pod-network-cidr 10.244.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.IPALLOC_RANGE=10.244.0.0/16"

Now, edit the kube-proxy DaemonSet by running

kubectl edit daemonset kube-proxy --namespace=kube-system

I made the following change:

--- daemonset-orig.yaml	2018-01-27 00:16:15.319098008 -0800
+++ daemonset-amd64.yaml	2018-01-27 00:15:37.839511917 -0800
@@ -47,6 +47,8 @@
           readOnly: true
       dnsPolicy: ClusterFirst
       hostNetwork: true
+      nodeSelector:
+        kubernetes.io/arch: amd64
       restartPolicy: Always
       schedulerName: default-scheduler
       securityContext: {}

You'll need to add the following to the configuration under spec: template: spec:

nodeSelector:
    kubernetes.io/arch: amd64
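
If you'd rather apply the change non-interactively, the same nodeSelector can be added with kubectl patch (a sketch equivalent to the edit above; the default strategic merge patch merges the new key into the pod template):

```shell
# Pin the existing kube-proxy DaemonSet to amd64 nodes without opening an editor
kubectl patch daemonset kube-proxy --namespace=kube-system \
  --patch '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/arch":"amd64"}}}}}'
```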

While you're still in the editor, copy the configuration file somewhere you can find it, and name it daemonset-arm.yaml; you'll be creating another one for arm nodes. Save and exit, and your changes will be applied.
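
If you missed copying the file while in the editor, you can also dump the saved DaemonSet afterwards and use that as your starting point (a sketch; assumes kubectl is configured on the master):

```shell
# Export the current kube-proxy DaemonSet as the basis for the arm variant
kubectl get daemonset kube-proxy --namespace=kube-system -o yaml > daemonset-arm.yaml
```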

You'll need to remove some of the metadata fields from the file. The main things to note are the changes to the name (in metadata), the container image, and the nodeSelector:

--- daemonset-amd64.yaml	2018-01-27 00:15:37.839511917 -0800
+++ daemonset-arm.yaml	2018-01-26 23:50:31.484332549 -0800
@@ -1,19 +1,10 @@
 apiVersion: extensions/v1beta1
 kind: DaemonSet
 metadata:
-  creationTimestamp: 2018-01-27T07:27:28Z
-  generation: 2
   labels:
     k8s-app: kube-proxy
-  name: kube-proxy
+  name: kube-proxy-arm
   namespace: kube-system
-  resourceVersion: "1662"
-  selfLink: /apis/extensions/v1beta1/namespaces/kube-system/daemonsets/kube-proxy
-  uid: 8769e0b3-0333-11e8-8cb9-40a8f02df8cb
 spec:
   revisionHistoryLimit: 10
   selector:
@@ -29,7 +20,7 @@
       - command:
         - /usr/local/bin/kube-proxy
         - --config=/var/lib/kube-proxy/config.conf
-        image: gcr.io/google_containers/kube-proxy-amd64:v1.18.8
+        image: gcr.io/google_containers/kube-proxy-arm:v1.18.8
         imagePullPolicy: IfNotPresent
         name: kube-proxy
         resources: {}
@@ -48,7 +39,7 @@
       dnsPolicy: ClusterFirst
       hostNetwork: true
       nodeSelector:
-        kubernetes.io/arch: amd64
+        kubernetes.io/arch: arm
       restartPolicy: Always
       schedulerName: default-scheduler
       securityContext: {}
@@ -79,11 +70,3 @@
     rollingUpdate:
       maxUnavailable: 1
     type: RollingUpdate
-status:
-  currentNumberScheduled: 1
-  desiredNumberScheduled: 1
-  numberAvailable: 1
-  numberMisscheduled: 0
-  numberReady: 1
-  observedGeneration: 2
-  updatedNumberScheduled: 1
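
If you need to support several architectures, the copy-and-edit above can be scripted. This is a hypothetical sketch: the sed expressions assume your saved manifest is named daemonset-amd64.yaml and has the exact indentation shown in the diffs above (two spaces before the metadata name, `arch: amd64` in the nodeSelector).

```shell
# Generate per-architecture DaemonSet manifests from the amd64 one.
# Assumes daemonset-amd64.yaml matches the layout shown in this guide.
for arch in arm arm64; do
  sed -e "s/^  name: kube-proxy$/  name: kube-proxy-${arch}/" \
      -e "s/kube-proxy-amd64:/kube-proxy-${arch}:/" \
      -e "s/arch: amd64/arch: ${arch}/" \
      daemonset-amd64.yaml > "daemonset-${arch}.yaml"
done
```

Review the generated files before creating them; the anchored first expression renames only the DaemonSet (two-space indent), not the container, which shares the name kube-proxy at a deeper indent.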

Now, you can create the new DaemonSet by running

kubectl create -f daemonset-arm.yaml

Finally, bring up the other nodes by running the kubeadm join ... command printed out during the initialization phase.
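
Once the nodes have joined, you can confirm that each one carries the architecture label the DaemonSets select on (the -L flag adds a label column to the node listing):

```shell
# Show each node's architecture label alongside its status
kubectl get nodes -L kubernetes.io/arch
```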

You should soon see everything up and running (I have an amd64 master and 4 arm nodes in this example):

NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE
kube-system   etcd-master                       1/1       Running   0          54m
kube-system   kube-apiserver-master             1/1       Running   0          54m
kube-system   kube-controller-manager-master    1/1       Running   0          54m
kube-system   kube-dns-6f4fd4bdf-n7tgn          3/3       Running   0          55m
kube-system   kube-proxy-arm-8nggz              1/1       Running   0          31m
kube-system   kube-proxy-arm-9vxn8              1/1       Running   0          31m
kube-system   kube-proxy-arm-h48nx              1/1       Running   0          31m
kube-system   kube-proxy-arm-hvdxw              1/1       Running   0          31m
kube-system   kube-proxy-dw5nw                  1/1       Running   0          36m
kube-system   kube-scheduler-master             1/1       Running   0          54m
kube-system   weave-net-frjln                   2/2       Running   3          31m
kube-system   weave-net-qgw9s                   2/2       Running   0          53m
kube-system   weave-net-vmj5p                   2/2       Running   3          31m
kube-system   weave-net-xg766                   2/2       Running   3          31m
kube-system   weave-net-xh54m                   2/2       Running   3          31m

Success!

@buptliuwei

Thanks, I solved it

@rbehravesh

rbehravesh commented May 15, 2018

@squidpickles Where can I add the line in the daemonset-amd64.yaml file? For me there is no such text in the file to edit. Also, when I add it as a new line, I get an error when saving the file. I also cannot find the file to rename!

@mitchellcmurphy

This is amazing. Thank you so much. Worked like a charm without issue. A medal should be awarded.

@rbehravesh I found that the first edit isn't needed anymore, but you still need to copy the file.

@alomsimoy

Amazing guide!
Just a question, can it be done the other way? I mean, setting an arm device as a master (cheaper) and x86 as nodes (more powerful to run containers, and x86 container compatibility)?

@lucas-dclrcq

Amazing guide!
Just a question, can it be done the other way? I mean, setting an arm device as a master (cheaper) and x86 as nodes (more powerful to run containers, and x86 container compatibility)?

Yes it can totally be done the other way. If you have matching daemon sets for all your architectures, it does not matter if your master is arm or x86.

@JackAllTrades-MoN

Thank you so much. This was really helpful for me.
Anyway, I think the key "beta.kubernetes.io/arch" has already been deprecated,
but it worked in my environment with the key "kubernetes.io/arch" instead.

@squidpickles
Author

Glad it's been helpful. I updated the gist with the new label (and updated kube-proxy versions, just in case someone was copy/pasting)

@HsimWong

HsimWong commented Jul 21, 2021

Thanks for the tutorial, but I have encountered a problem.
I have an arm64 TX2 board connected over ethernet, but weave-net is stuck at 0/2 Init:CrashLoopBackOff.
I simply copied/pasted all the lines but changed arm to arm64 in nodeSelector: kubernetes.io/arch:, since I found the cluster was unable to deploy the kube-proxy pod otherwise.

root@ubuntu:~/cola-device-plugin# kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS                  RESTARTS   AGE
kube-system   coredns-558bd4d5db-8fcmf         1/1     Running                 0          66m
kube-system   coredns-558bd4d5db-qq7gb         1/1     Running                 0          66m
kube-system   etcd-ubuntu                      1/1     Running                 0          67m
kube-system   kube-apiserver-ubuntu            1/1     Running                 0          67m
kube-system   kube-controller-manager-ubuntu   1/1     Running                 0          67m
kube-system   kube-proxy-5w4b5                 1/1     Running                 0          42m
kube-system   kube-proxy-arm-4pffj             1/1     Running                 0          8m29s
kube-system   kube-proxy-ffkpc                 1/1     Running                 0          42m
kube-system   kube-scheduler-ubuntu            1/1     Running                 0          67m
kube-system   weave-net-84kzv                  0/2     Init:CrashLoopBackOff   6          8m29s
kube-system   weave-net-cd4jt                  2/2     Running                 0          45m
kube-system   weave-net-ddd2f                  2/2     Running                 1          66m

The events for the problem pod are shown below

Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  15m                   default-scheduler  Successfully assigned kube-system/weave-net-84kzv to nvidia-desktop
  Normal   Pulling    15m                   kubelet            Pulling image "docker.io/weaveworks/weave-kube:2.8.1"
  Normal   Pulling    14m                   kubelet            Pulling image "docker.io/weaveworks/weave-kube:2.8.1"
  Normal   Pulled     14m                   kubelet            Successfully pulled image "docker.io/weaveworks/weave-kube:2.8.1" in 9.883527744s
  Normal   Created    13m (x5 over 14m)     kubelet            Created container weave-init
  Normal   Started    13m (x5 over 14m)     kubelet            Started container weave-init
  Normal   Pulled     13m (x4 over 14m)     kubelet            Container image "docker.io/weaveworks/weave-kube:2.8.1" already present on machine
  Warning  BackOff    4m35s (x46 over 14m)  kubelet            Back-off restarting failed container

Following your steps, I tried to solve the weave problem the same way:

  1. added a nodeSelector
  2. changed the corresponding images to the arm64 version

But the problem still exists:
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/arch=arm64
Tolerations:                 :NoSchedule op=Exists
                             :NoExecute op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  3m21s                 default-scheduler  Successfully assigned kube-system/weave-net-arm-pxccn to nvidia-desktop
  Normal   Pulling    3m20s                 kubelet            Pulling image "docker.io/weaveworks/weave-kube-arm64:2.8.1"
  Normal   Pulled     3m17s                 kubelet            Successfully pulled image "docker.io/weaveworks/weave-kube-arm64:2.8.1" in 3.178248287s
  Normal   Created    105s (x5 over 3m17s)  kubelet            Created container weave-init
  Normal   Started    105s (x5 over 3m16s)  kubelet            Started container weave-init
  Normal   Pulled     105s (x4 over 3m16s)  kubelet            Container image "docker.io/weaveworks/weave-kube-arm64:2.8.1" already present on machine
  Warning  BackOff    104s (x9 over 3m15s)  kubelet            Back-off restarting failed container

I'm currently out of ideas. I have also tried flannel and calico, but neither worked.
Do you have any suggestions for configuring this?
Please help me.

@squidpickles
Author

@HsimWong, it will help to see logs for the crashed container.

You can view logs for that particular container:

kubectl -n kube-system logs weave-net-84kzv -c weave-init
