@mowings
Last active April 6, 2020 20:01
Add a new Kubernetes node to an existing cluster

Install the node software:

apt update && apt upgrade -y
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

apt-get update
apt install linux-image-extra-virtual ca-certificates curl software-properties-common -y

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) \
  stable"

apt update
apt install docker-ce kubelet kubeadm kubectl kubernetes-cni -y # See note on versions below, though!!!

Be sure kubelet versions match on master and new node

Note that you have to match the version of the running cluster. This means you might need to install older packages on the new node (apt gets you the latest by default).

To get the version running, go to the master:

kubelet --version  # You need to be running the same version on the worker

To get the available versions, on the worker, run:

curl -s https://packages.cloud.google.com/apt/dists/kubernetes-xenial/main/binary-amd64/Packages | grep Version | awk '{print $2}'
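If the cluster is on a particular minor release, you can narrow that listing down with awk. A sketch, with the Packages output simulated by sample "Version:" lines so it runs standalone:

```shell
# Sketch: pick out only the 1.14.x versions from the "Version:" lines
# that the curl | grep pipeline above prints. "sample" simulates that output.
sample='Version: 1.14.1-00
Version: 1.15.0-00
Version: 1.14.2-00'
matches=$(echo "$sample" | awk '$2 ~ /^1\.14\./ {print $2}')
echo "$matches"
```

In practice you would pipe the real curl output into the awk filter instead of the sample variable.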

Then install the software with specific versions, instead of the defaults as above.

So assuming we need 1.14.1:

apt-get install -y kubelet=1.14.1-00 kubectl=1.14.1-00 kubeadm=1.14.1-00
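To keep a routine apt upgrade from later pulling these packages ahead of the cluster version, you can also put them on hold after installing:

```shell
# Prevent apt upgrade from bumping the pinned kubernetes packages
apt-mark hold kubelet kubeadm kubectl
```

Run apt-mark unhold on them when you are ready to upgrade the cluster deliberately.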

Optional -- set up a config file for kubectl

Next, if you intend to run kubectl on the node, you should create a config file. From the master, copy /etc/kubernetes/admin.conf into $HOME/.kube/config on the new node. Be sure that ownership is set to your user and the file is read-only.

sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
echo 'export KUBECONFIG=$HOME/.kube/config' >> ~/.bashrc
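The copy from the master can be done with scp. A sketch, assuming the master is reachable over SSH as the hypothetical hostname `master` and that root can read /etc/kubernetes/admin.conf there:

```shell
# Copy the admin kubeconfig from the master to this node
# ("master" is a placeholder hostname -- substitute your master's address)
mkdir -p $HOME/.kube
scp root@master:/etc/kubernetes/admin.conf $HOME/.kube/config
```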

Join the new node into the cluster

You now need to join the cluster. Go to the master, and get a new join token, along with the actual command to run on the node to do the join. From the master:

sudo kubeadm token create --print-join-command

This will output a command line that you can execute on the new node to join the cluster. It will look something like this:

kubeadm join 10.132.35.160:6443 --token e824qk.lirl8yvvne9vuiag  --discovery-token-ca-cert-hash sha256:<blah blah>

Run that command (using sudo) on the worker.

If you get an error that looks something like the following:

error execution phase kubelet-start: configmaps "kubelet-config-1.15" is forbidden: User "system:bootstrap:g0toug" cannot get resource "configmaps" in API group "" in the namespace "kube-system"

Then you are probably running a later version (in this case 1.15) than the rest of the cluster. Uninstall kubeadm, kubelet, and kubectl, remove /etc/kubernetes, and reinstall those same packages with the correct versions per above.
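A sketch of that recovery, assuming the cluster is on 1.14.1 as in the example above:

```shell
# Remove the too-new packages and the state the failed join left behind
apt-get remove -y kubeadm kubelet kubectl
rm -rf /etc/kubernetes
# Reinstall at the cluster's version, then retry the join command
apt-get install -y kubelet=1.14.1-00 kubectl=1.14.1-00 kubeadm=1.14.1-00
```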

Check nodes

You can now use kubectl get nodes to check the status of the cluster. Note that it can take a minute or so for the new node to join.

Digital Ocean Installation Notes

It looks like the join failed to bind to the node's private address. I had to pass --node-ip <private ip> manually via the kubelet drop-in under /etc/systemd/system/kubelet.service.d. It may be possible to pass a flag into kubeadm above to get this right -- not sure. Just something to be aware of when you need to restrict which interface the kubelet gets bound to. (All the other nodes were correct, so I know I did this correctly initially.)
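With the kubeadm deb packages, one way to set that flag without editing the systemd drop-in directly is the KUBELET_EXTRA_ARGS environment file the kubelet unit sources. A sketch, with the IP below being a placeholder for the node's actual private address:

```shell
# Append the node-ip flag to the kubelet's extra args
# (10.132.35.161 is a placeholder -- use this node's private IP)
echo 'KUBELET_EXTRA_ARGS=--node-ip=10.132.35.161' >> /etc/default/kubelet
systemctl daemon-reload
systemctl restart kubelet
```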

Note that, as is the case with Docker, Kubernetes manages its own set of iptables rules, and can thus unexpectedly punch a hole in your firewall if you are relying on ufw or iptables directly. You should use Digital Ocean's firewall product to secure your cluster, not ufw.
