Installing Kubernetes with the Flannel Network Plugin on CentOS 7

Install Prerequisites on ALL (Worker and Master) Nodes

Let's remove any old versions of Docker if they exist:

sudo yum remove docker \
                  docker-common \
                  docker-selinux \
                  docker-engine

And let's (re)install a fresh copy of Docker:

sudo yum install docker
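
Depending on the image you're starting from, the Docker service may not be enabled or started automatically after the install. If that's the case, this should take care of it:

sudo systemctl enable docker && sudo systemctl start docker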

We will also need to install the latest release of kubectl, which is used to control Kubernetes. The instructions are straight from https://kubernetes.io/docs/tasks/tools/install-kubectl/

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl

Make it executable:

chmod +x ./kubectl

And pop it into the PATH:

sudo mv ./kubectl /usr/local/bin/kubectl
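
To confirm the binary is on the PATH and runs, check the client version:

kubectl version --client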

Install kubelet and kubeadm on ALL (Worker and Master) Nodes

This is straight from https://kubernetes.io/docs/setup/independent/install-kubeadm/

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
sudo setenforce 0
sudo yum install -y kubelet kubeadm
sudo systemctl enable kubelet && sudo systemctl start kubelet
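
Note that setenforce 0 only disables SELinux enforcement until the next reboot. If you want it to stay permissive across reboots (reasonable for a lab setup like this one), you can also update the config file:

sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config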

Typically you would also need to install the CNI packages, but the kubernetes-cni package is pulled in automatically as a dependency here.
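
If you want to double-check, the CNI plugin binaries typically end up in /opt/cni/bin, installed by the kubernetes-cni package:

ls /opt/cni/bin
rpm -q kubernetes-cni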

Configure Kubernetes Master

On the master node, we want to run:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

The --pod-network-cidr=10.244.0.0/16 option is a requirement for Flannel - don't change that network address!
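
The reason you can't change it: Flannel's manifest pins its overlay network to the same range in its net-conf.json. At the time of writing, the relevant excerpt of kube-flannel.yml looks like this:

net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }

If you really need a different CIDR, it has to be changed in both places.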

Save the join command it prints out; we'll need it later, but we don't want to run it just yet. You should see a message like:

You can now join any number of machines by running the following on each node as root:

  kubeadm join --token <token> <IP>:6443

Configure access to the cluster as a normal user. This part, I realized, was pretty important, as the cluster doesn't like to play well when you administer it as root.

sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
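
Note that the export only lasts for the current shell session. To make kubectl work in future sessions as well, you can instead copy the config to the default location kubectl looks in:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config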

Install Flannel for the Pod Network (On Master Node)

We need to install the pod network before the cluster can come up fully, so we'll apply the latest yaml file that Flannel provides. Most installations will use the following:

kubectl apply -f https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml

At this point, give it a minute, then have a look at the status of the cluster. Run kubectl get pods --all-namespaces and see what it comes back with. If everything shows Running, then you're in business! Otherwise, you may notice errors like:

NAMESPACE     NAME                                                    READY     STATUS              RESTARTS   AGE
...
kube-system   kube-flannel-ds-knq4b                                   1/2       Error               5          3m
...

or

NAMESPACE     NAME                                                    READY     STATUS              RESTARTS   AGE
...
kube-system   kube-flannel-ds-knq4b                                   1/2       CrashLoopBackOff    5          5m
...

If this is the case, you will need to apply Flannel's RBAC manifest as well:

kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml

Add Worker Nodes to the Cluster

Up to this point we haven't really touched the worker nodes (other than installing the prerequisites). Now you can join them to the cluster by running the command that was given to us when we initialized the master:

sudo kubeadm join --token <token> <ip>:6443
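
If you've lost the join command, or the token has expired (by default tokens are only valid for 24 hours), you can generate a fresh one on the master. On newer versions of kubeadm this is a one-liner:

sudo kubeadm token create --print-join-command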

We'll then see more pods spinning up for the new nodes:

kubectl get pods --all-namespaces

NAMESPACE     NAME                                                    READY     STATUS    RESTARTS   AGE
...
kube-system   kube-flannel-ds-fldtn                                   0/2       Pending   0          3s
...
kube-system   kube-proxy-c8s32                                        0/1       Pending   0          3s

And to confirm, when we do a kubectl get nodes, we should see something like:

NAME                            STATUS    AGE       VERSION
server1                         Ready     46m       v1.7.0
server2                         Ready     3m        v1.7.0
server3                         Ready     2m        v1.7.0

Running Workloads on the Master Node

By default, no workloads will run on the master node, which is usually what you want in a production environment. In my case, since I'm using the cluster for development and testing, I want to allow containers to run on the master node as well. The master is kept workload-free by a "taint" on the node, and removing that taint lets pods be scheduled there.

On the master, we can run kubectl taint nodes --all node-role.kubernetes.io/master- (note the trailing dash, which removes the taint) to allow the master to run workloads as well.
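
If you later want to restore the default behaviour, re-adding the taint should do it (a sketch of the inverse operation; replace <master-name> with your master's node name):

kubectl taint nodes <master-name> node-role.kubernetes.io/master=:NoSchedule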

@JasonChen233

JasonChen233 commented Apr 5, 2022

https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml is a 404.
I found it at https://github.com/flannel-io/flannel/blob/master/Documentation/k8s-manifests/kube-flannel-rbac.yml via Google,
but when I run kubectl create -f, I get:

Error from server (AlreadyExists): error when creating "k8s/flannel/kube-flannel-rbca.yaml": clusterroles.rbac.authorization.k8s.io "flannel" already exists
Error from server (AlreadyExists): error when creating "k8s/flannel/kube-flannel-rbca.yaml": clusterrolebindings.rbac.authorization.k8s.io "flannel" already exists

I also tried kubectl apply -f, but the pod status is still CrashLoopBackOff.

How can I fix that?


Update: I found it was because the CIDR config was missing, and I fixed it by setting the CIDR!
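
For anyone hitting the same thing: the usual cause is that kubeadm init was run without --pod-network-cidr, so nodes never get a pod CIDR assigned and the flannel pod crashes. A sketch of the fix, assuming you can afford to rebuild the control plane:

sudo kubeadm reset
sudo kubeadm init --pod-network-cidr=10.244.0.0/16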

@marco74

marco74 commented May 25, 2022

On Debian Buster I can get a Kubernetes 1.24 stack running with containerd. On Debian Bullseye I get an error message that the master refuses connections on :6443. Is there something I have to prepare on Debian Bullseye?

@philippludwig

Yes, this does not work on bullseye. I get:

failed to find plugin "loopback" in path [/usr/lib/cni]

@philippludwig

For anyone else, you can install the containernetworking-plugins package to get this working.
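
On Debian/Bullseye that should just be (assuming the stock package name):

sudo apt-get install -y containernetworking-plugins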

@marco74

marco74 commented Jun 9, 2022

Finally, I found a solution for getting Kubernetes 1.24 running on Debian Bullseye. The master refuses connections on :6443 because the api-server was not up at that moment: several pods were continuously being restarted because they crashed. Eventually I found a GitHub issue (kubernetes/kubernetes#105762). From there I extracted the two lines with version and runtime_type, and with those it worked properly. I had previously opened another GitHub issue (kubernetes/website#33795 (comment)); there you can read how I install everything now. You can also read and try https://wiki.pratznschutz.com/index.php?title=Kubernetes_1.24_auf_Debian_11 – this is in German; if you understand it, feel free to use it.
Does containernetworking-plugins include the flannel package, which I installed using curl?
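
For reference, a sketch of what that fix looks like, as I read kubernetes/kubernetes#105762 (paths assume a stock Debian containerd install – verify against the issue before relying on this):

containerd config default | sudo tee /etc/containerd/config.toml
# the two relevant lines in the generated file are:
#   version = 2                               (top of the file)
#   runtime_type = "io.containerd.runc.v2"    (under the runc runtime section)
sudo systemctl restart containerd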

@philippludwig

No, it just provides the loopback plugin.

@philippludwig

So while the instructions of @marco74 work, two things are missing:

  • Disable swap everywhere (see the sketch after this list)
  • On the nodes, containernetworking-plugins is definitely required to be able to run any pods.
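
A sketch of the usual way to disable swap on each machine:

sudo swapoff -a                            # turn swap off immediately
sudo sed -i '/ swap / s/^/#/' /etc/fstab   # comment out swap entries so it stays off after reboot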

@marco74

marco74 commented Jul 4, 2022

@philippludwig Yes, you're right. I did not mention disabling swap because I tried on VMs with no swap at all, so I did not have to disable it.

But I never installed containernetworking-plugins, and running a pod is definitely possible; it's also possible to let pods communicate with each other. I deleted and applied the same yaml a lot of times and it always worked. I have to admit all pods are single-container pods so far. However, I will expand this project to multi-container pods anyway, and then I will see whether containernetworking-plugins is really needed.

@zbinkz

zbinkz commented Feb 23, 2023

@JasonChen233 I'm having the same problem. I eventually managed to load a flannel yml that includes the RBAC config, but my kube-flannel pod status turns from Running to CrashLoopBackOff a few seconds after launching. How did you fix your CIDR config?

@namma14

namma14 commented Apr 26, 2023

While running the RBAC manifest to install flannel on the master, the link is not working:
error: unable to read URL "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml", server reported 404 Not Found, status code=404

@kumarankit999

It is not working yet
