@emresaglam
Last active January 13, 2022 18:56
Kubernetes installation notes.

This is a fresh install on an Intel-based computer: bare metal, no VM. The goal is to end up with a single-node Kubernetes cluster.
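Since the goal is a single node, one thing worth flagging up front: kubeadm taints the control-plane node by default so that ordinary pods won't be scheduled on it. For a one-machine cluster you have to remove that taint once the cluster is up. A minimal sketch (the exact taint key depends on the Kubernetes version, so treat it as an assumption for your release):

# Allow workloads on the control-plane node (run once the cluster is up).
# On ~1.23 and earlier the taint key is node-role.kubernetes.io/master;
# newer releases use node-role.kubernetes.io/control-plane instead.
kubectl taint nodes --all node-role.kubernetes.io/master-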

The Struggle and the Installation

# lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 20.04.3 LTS
Release:	20.04
Codename:	focal
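For context, the install itself followed the standard kubeadm route. Roughly something like this (a sketch based on the upstream docs from that era, not a transcript of my exact commands; repo URLs and package choices are assumptions):

# Disable swap (kubelet refuses to start with swap enabled)
swapoff -a

# Container runtime
apt-get update && apt-get install -y docker.io

# Kubernetes packages (repo and key as documented at the time)
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt-get update && apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl

# Bring up the control plane
kubeadm init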
The kubeadm init step is where things went wrong:

[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
  • docker ps didn't show any running containers, and docker ps -a didn't show any stopped ones either.
  • kubeadm reset followed by another kubeadm init didn't change anything; the same error came back.
  • I ran swapoff -a; it didn't help.
  • Rebooting didn't help either.
  • Finally I found that you need to change the Docker cgroup driver to systemd, and that fixed it. Here are the steps:
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

systemctl daemon-reload
systemctl restart docker

You can verify the Docker cgroup driver by running docker info:

~# docker info
...
 Cgroup Driver: systemd
 Cgroup Version: 1
...
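The reason this matters: kubelet and the container runtime have to agree on the cgroup driver. On a kubeadm-managed node you can check what kubelet is configured for; a quick sketch, assuming the default kubeadm config path:

# kubeadm writes the kubelet configuration here by default
grep cgroupDriver /var/lib/kubelet/config.yaml

# If it reports cgroupfs (or nothing), set "cgroupDriver: systemd" in that
# file and restart kubelet so both sides match
systemctl restart kubelet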
  • Once Docker restarts, it picks the Kubernetes containers back up and runs them.
  • I then started getting The connection to the server localhost:8080 was refused - did you specify the right host or port? errors from kubectl. To fix this, you need to point the KUBECONFIG env variable at a valid kubeconfig.
    • For root: export KUBECONFIG=/etc/kubernetes/admin.conf
    • For a regular user:
     mkdir -p $HOME/.kube
     sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
     sudo chown $(id -u):$(id -g) $HOME/.kube/config
     export KUBECONFIG=$HOME/.kube/config
    
    • Also add the KUBECONFIG export to your shell profile so new sessions pick it up (see the sketch below).
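For the shell profile part, something like this does it (assuming bash and the regular-user path above):

# Persist KUBECONFIG for future shells
echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc

# Quick sanity check that kubectl can now reach the API server
kubectl get nodes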

Useful Notes

  • I use these commands to clean up a k8s installation:
~# kubeadm reset -f
~# iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
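kubeadm reset intentionally leaves a few things behind. Depending on how far the install got, you may also want the following (a sketch; paths are the usual defaults, and ipvsadm only matters if kube-proxy was running in IPVS mode):

# CNI configuration is not removed by kubeadm reset
rm -rf /etc/cni/net.d

# Stale kubeconfig files will otherwise keep pointing at the dead cluster
rm -rf $HOME/.kube/config

# Clear IPVS tables if kube-proxy was using IPVS
ipvsadm --clear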