Multus install on RKE2
Add the following to config.yaml (from https://docs.rke2.io/install/network_options#using-multus):
# /etc/rancher/rke2/config.yaml
cni:
- multus
- canal
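After editing the file, restart RKE2 so it picks up the CNI change (a sketch assuming a standard systemd-managed server install):

```shell
# Restart the RKE2 server so the multus + canal charts get (re)deployed.
systemctl restart rke2-server
```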
For an air-gapped install, pull and stage rancher/hardened-multus-cni:v4.0.2-build20230811.
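One way to stage that image is to pull and save it on a connected machine, then import it on each node. A sketch; the tar filename is arbitrary, and the ctr/socket paths assume a default RKE2 install:

```shell
# On a machine with internet access:
docker pull rancher/hardened-multus-cni:v4.0.2-build20230811
docker save rancher/hardened-multus-cni:v4.0.2-build20230811 -o multus.tar

# On each air-gapped node (RKE2 runs its own containerd; images live in the k8s.io namespace):
/var/lib/rancher/rke2/bin/ctr \
  --address /run/k3s/containerd/containerd.sock \
  -n k8s.io images import multus.tar
```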
Validate with kubectl get pods -A | grep -i multus-ds
Create a NetworkAttachmentDefinition for the local network.
cat <<EOF | kubectl create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.1.0/24",
        "rangeStart": "192.168.1.200",
        "rangeEnd": "192.168.1.216"
      }
    }'
EOF
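Confirm the object landed before wiring it into a pod:

```shell
kubectl get network-attachment-definitions
# the CRD short name also works:
kubectl get net-attach-def macvlan-conf -o yaml
```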
Run a test pod.
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: samplepod
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
  containers:
  - name: samplepod
    command: ["/bin/ash", "-c", "trap : TERM INT; sleep infinity & wait"]
    image: alpine
EOF
Get the network config from the test pod:
kubectl exec -it samplepod -- ip a
Good article: https://devopstales.github.io/kubernetes/multus/
DHCP anyone? Keep in mind that nohup /opt/cni/bin/dhcp daemon &
needs to be running on the node for DHCP leases to be passed into the pod.
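To keep that daemon running across reboots, a minimal systemd unit is one option. A sketch: the unit name is hypothetical, and a stale socket from an unclean shutdown has to be cleared before start (the plugin listens on /run/cni/dhcp.sock):

```ini
# /etc/systemd/system/cni-dhcp.service (hypothetical unit name)
[Unit]
Description=CNI DHCP IPAM daemon
After=network-online.target

[Service]
# Remove a stale socket left behind by an unclean shutdown, then run the daemon.
ExecStartPre=/bin/rm -f /run/cni/dhcp.sock
ExecStart=/opt/cni/bin/dhcp daemon
Restart=always

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable --now cni-dhcp on each node that will run DHCP-attached pods.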
cat <<EOF | kubectl create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-dhcp
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": { "type": "dhcp" }
    }'
EOF
And a matching test pod:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: dhcp
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-dhcp
spec:
  containers:
  - name: dhcp
    command: ["/bin/ash", "-c", "trap : TERM INT; sleep infinity & wait"]
    image: alpine
EOF
Get the IP with kubectl exec -it dhcp -- ip a
and now ping it from an external device.
Or run nginx:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-dhcp
spec:
  containers:
  - name: nginx
    image: nginx
EOF
And we can check for the 192.168.1.0/24 address with kubectl describe pod nginx
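With that LAN address in hand, the pod is reachable straight off the local network (192.168.1.205 below is a hypothetical lease from the DHCP server, not a value from this walkthrough):

```shell
# Run from another device on the same 192.168.1.0/24 segment, not from inside the cluster.
curl -I http://192.168.1.205
```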
@clemenko Thanks for the response. It was nothing that basic. When Cilium gets installed during an RKE2 deployment, its default Helm chart values make it act as an exclusive CNI. So when the cilium pod was starting up on the nodes, it mounted /etc/cni/net.d/ and renamed all other CNI configs to make them ineligible, so that there is no chance pod startups use another CNI. Long story short, it was renaming my 00-multus.conf to a .bak file, and it continued to do so if I tried to rename it back while the cilium pod was still running. I had to deploy a cilium HelmChartConfig manifest into the static folder on one of my nodes with...
...before it would stop. As soon as it stopped changing out the multus conf file, I was able to deploy ipvlan interfaces.
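The manifest itself is elided above, but based on the RKE2 and Cilium documentation it likely takes this shape: a HelmChartConfig for the rke2-cilium chart, dropped into the static manifests directory, that turns off Cilium's exclusive-CNI behavior. A sketch; verify the cni.exclusive value name against your chart version:

```yaml
# /var/lib/rancher/rke2/server/manifests/rke2-cilium-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-cilium
  namespace: kube-system
spec:
  valuesContent: |-
    cni:
      # Stop Cilium from renaming other CNI configs in /etc/cni/net.d/
      exclusive: false
```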