Kubernetes 1.25 on Fedora 37 @ Hetzner the right way


Using only kubeadm and helm from Fedora's repositories. These instructions are for Fedora 37+ only, installing Kubernetes from the distribution's own packages.

Note: avoid the kubernetes and kubernetes-master packages, as they are deprecated and their setup is more complex.

Note: I have not yet tinkered with IPv6, but it is in my plans.

The result should be a decent Kubernetes cluster, with the Cloud Controller Manager installed and managing the layer 2 private network so pods can talk at the fastest speed possible. Persistent storage will default to Hetzner's cloud volumes.

The setup has been tested with most of Bitnami's Helm recipes without issues.

Not recommended for people who want the official upstream Kubernetes binaries. Fedora's build is better tested with SELinux active, and since it defaults to permissive mode you can watch the SELinux logs to learn everything you will need to do before switching it to enforcing later.

Create a 10.0.0.0/8 network, keep the defaults

Create one load balancer and one cx11/cpx11 server to use as jumphost.

Disable the public network on the load balancer to be safer. We are going to use the jumphost to run kubectl/helm against the cluster at the end.

Create a simple firewall with no rules and apply it to all Kubernetes nodes, so it blocks all access from outside. I call it KISS security ;-)

Create 3 cpx11 servers for the control plane and at least 2 more for workers (use any servers with 2+ vCPU / 2+ GB RAM for worker nodes). I am setting up 4 worker nodes from the start, as many HA clustered services require 3+ nodes and I also want everything spread across servers to maximize performance.

Create the servers in order, one by one; otherwise Hetzner will create them concurrently and the private network IP addresses will end up in a different order. Try to keep it simple so you can follow along. Later, once you create the infrastructure with Terraform and use Ansible to automate everything and get new clusters running in a few minutes, this will not be a problem.

! Make sure to name them with a simple hostname, for example fsn1-k8s-cp-1, and use that in /etc/hosts !
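
If you prefer the CLI over the Hetzner console, the same infrastructure can be sketched roughly like this with the hcloud tool. The network name, firewall name, image name and the eu-central network zone are only example values; adjust them to your project:

# rough sketch only; names, image and zone are example values
hcloud network create --name k8s-net --ip-range 10.0.0.0/8
hcloud network add-subnet k8s-net --type cloud --network-zone eu-central --ip-range 10.0.0.0/24
hcloud load-balancer create --name k8s-cp-lb --type lb11 --location fsn1
hcloud firewall create --name k8s-kiss
hcloud server create --name fsn1-k8s-cp-1 --type cpx11 --image fedora-37 \
  --location fsn1 --network k8s-net --firewall k8s-kiss
# repeat "hcloud server create" one by one for cp2, cp3 and the worker nodes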

Set up hosts using the private IP addresses:

10.0.0.2 k8s-cp-lb
10.0.0.4 cp1
10.0.0.5 cp2
10.0.0.6 cp3
10.0.0.7 n1
10.0.0.8 n2
10.0.0.9 n3
10.0.0.10 n4

Add them to /etc/hosts and, on each node, remove the localhost entries for that node's own hostname, including the IPv6 localhost entry. Those entries will confuse some services.
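
For example, on cp1 the resulting /etc/hosts would look roughly like this (note that cp1 itself no longer appears on the 127.0.0.1 or ::1 lines):

127.0.0.1   localhost localhost.localdomain
::1         localhost localhost.localdomain
10.0.0.2    k8s-cp-lb
10.0.0.4    cp1
10.0.0.5    cp2
10.0.0.6    cp3
10.0.0.7    n1
10.0.0.8    n2
10.0.0.9    n3
10.0.0.10   n4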

Update the OS and install Kubernetes from the distro:

dnf update
dnf install kubernetes-kubeadm \
  kubernetes-client \
  cri-o \
  helm \
  ebtables-legacy \
  iptables-legacy \
  iproute-tc \
  ethtool

Note: Ignore the *-legacy packages; our setup will not use ip/ebtables, but installing them keeps kubeadm from warning about missing tools and avoids any issues, even though there should not be any. If you choose to run kube-proxy in its normal mode, they would actually be needed.

Run dnf remove zram-generator to remove the zram swap service preinstalled in Hetzner's Fedora images. Why would anyone use swap???

Enable br_netfilter module: echo "br_netfilter" > /etc/modules-load.d/kubernetes.conf

Enable ip_forward: echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/99-kubernetes.conf
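
Both settings only take effect after a reboot; to apply them immediately as well, load the module and reload sysctl by hand:

# load the bridge netfilter module now and re-read all sysctl drop-ins
modprobe br_netfilter
sysctl --system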

Optionally disable SELinux, but with recent Kubernetes and the cri-o container runtime it should be fine to keep it. I keep it enabled as it supports some extra security features. Do the research and go beyond the lazy crowd ;-)

Note: Hetzner's images might ship with SELinux in permissive mode. You can keep it like this and pay close attention to the logs to learn what you need to do to switch it to enforcing mode.

Use nano to create /etc/systemd/system/kubelet.service.d/20-hcloud.conf and add the following on each node:

[Service]
Environment="KUBELET_EXTRA_ARGS=--cloud-provider=external"

Note: I usually create a snapshot image at this point and start all new nodes from it. When I must prepare for an upgrade or make other changes, I start a fresh server from the snapshot, replicate the changes on it and create a new snapshot from that. I keep the last 2-3 of them just in case I notice something breaking after the last update.
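
Also make sure the container runtime and kubelet are enabled so they come up after the reboot below; a minimal sketch (the Fedora packages may already have enabled crio, and kubelet will restart in a loop until kubeadm init/join runs, which is expected):

systemctl daemon-reload        # pick up the 20-hcloud.conf drop-in
systemctl enable --now crio    # container runtime
systemctl enable kubelet       # will only run properly after kubeadm init/join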

Reboot. Repeat for each node.

Set up the first control plane node. The advertise parameter is important: it must match the node's private IP to ensure the cluster communicates over the private network.

Optionally put export KUBECONFIG=/etc/kubernetes/admin.conf in ~/.bash_profile.

In the next instructions, make sure you use the correct private IP where needed!

Create a file kubeadm.conf with the contents:

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.25.4
networking:
  podSubnet: "10.244.0.0/16"
apiServer:
  extraArgs:
    authorization-mode: "Node,RBAC"
controllerManager:
  extraArgs:
    bind-address: "0.0.0.0"
scheduler:
  extraArgs:
    bind-address: "0.0.0.0"
controlPlaneEndpoint: "k8s-cp-lb:6443"

Note: I'm changing some defaults so that Prometheus can later also collect all the metrics from the control plane nodes.

kubeadm init \
  --upload-certs \
  --config kubeadm.conf \
  --apiserver-advertise-address "10.0.0.4" \
  --apiserver-cert-extra-sans "cp2,cp3,10.0.0.5,10.0.0.6" \
  --skip-phases=addon/kube-proxy
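
Once init finishes, the first node shows up but stays NotReady until Cilium is installed further below; that is expected:

export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes   # STATUS will be NotReady until the CNI (Cilium) is deployed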

You may or may not want to use Cilium's kube-proxy replacement (that is why the kube-proxy addon phase is skipped above). It is slightly more performant.

Join the other control plane nodes using the credentials printed by kubeadm init.

kubeadm join k8s-cp-lb:6443 --token replacetoken \
        --discovery-token-ca-cert-hash replacethehash \
        --control-plane --certificate-key replacethekey --apiserver-advertise-address "10.0.0.X"

Join the workers

kubeadm join k8s-cp-lb:6443 --token replacetoken \
        --discovery-token-ca-cert-hash replacethehash
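
If the token or certificate key printed by kubeadm init has expired or was lost, fresh ones can be generated on the first control plane node with the standard kubeadm commands:

kubeadm token create --print-join-command        # new token + ca-cert-hash for worker joins
kubeadm init phase upload-certs --upload-certs   # new certificate key for control plane joins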

Add the Hetzner API token for the project in which you set up the cluster

kubectl -n kube-system create secret generic hcloud --from-literal=token=getyourowntoken --from-literal=network=nameofyournetwork

Add the Hetzner CCM with networks support

kubectl apply -f https://raw.githubusercontent.com/hetznercloud/hcloud-cloud-controller-manager/v1.13.2/deploy/ccm-networks.yaml

Set up networking with Cilium

helm repo add cilium https://helm.cilium.io/
helm repo update
helm install cilium cilium/cilium --version 1.12.4 --namespace kube-system \
	--set "tunnel=disabled" \
	--set "ipam.mode=kubernetes" \
	--set "ipv4NativeRoutingCIDR=10.244.0.0/16" \
	--set "k8s.requireIPv4PodCIDR=true" \
	--set "kubeProxyReplacement=strict" \
	--set "k8sServiceHost=eu-k8s-cp-lb" \
	--set "k8sServicePort=6443" \
	--set "loadBalancer.mode=dsr"

Add Hetzner CSI

kubectl apply -f https://raw.githubusercontent.com/hetznercloud/csi-driver/v2.1.0/deploy/kubernetes/hcloud-csi.yml
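
The upstream manifest registers a StorageClass for Hetzner volumes (named hcloud-volumes); a quick sanity check:

kubectl get storageclass                       # hcloud-volumes should be listed
kubectl -n kube-system get pods | grep hcloud  # CSI controller and node pods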

Add cert-manager

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.10.1 \
  --set installCRDs=true
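
Note that cert-manager does not create any issuer on its own; the letsencrypt-prod cluster issuer referenced in the Grafana ingress annotation further below has to be created separately. A minimal HTTP-01 sketch (the e-mail address is a placeholder, use your own):

kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx
EOF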

Add ingress-nginx. Make sure to use the correct location and load balancer hostname!

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set-string controller.service.annotations."load-balancer\.hetzner\.cloud/location"="fsn1" \
  --set-string controller.service.annotations."load-balancer\.hetzner\.cloud/use-private-ip"="true" \
  --set controller.config.use-proxy-protocol=true \
  --set-string controller.service.annotations."load-balancer\.hetzner\.cloud/uses-proxyprotocol"="true" \
  --set controller.watchIngressWithoutClass=true \
  --set-string controller.service.annotations."load-balancer\.hetzner\.cloud/hostname"="lb.eu.nosweat.cloud"

Ensure A and AAAA records are added to your DNS once you can get the assigned IPv4 and IPv6 addresses.
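
The addresses show up on the ingress-nginx Service once the Hetzner load balancer has been provisioned (with the hostname annotation set, EXTERNAL-IP may show the hostname instead; the IPv4/IPv6 addresses are also visible on the load balancer in the Hetzner console):

kubectl -n ingress-nginx get svc ingress-nginx-controller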

Add Prometheus. I'm still trying to get the official Helm chart working with this setup, but I found Bitnami's chart to just work. This is still a bit of a work in progress for me too.

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install --namespace monitoring --create-namespace kube-prometheus bitnami/kube-prometheus \
  --set "prometheus.persistence.enabled=true" \
  --set "prometheus.persistence.size=10Gi" \
  --set "alertmanager.persistence.enabled=true" \
  --set "alertmanager.persistence.size=10Gi"

Update ingress-nginx for monitoring

helm upgrade ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --reuse-values \
  --set controller.metrics.enabled=true \
  --set controller.metrics.serviceMonitor.enabled=true \
  --set controller.metrics.serviceMonitor.additionalLabels.release="prometheus" \
  --set controller.metrics.serviceMonitor.namespace=monitoring \
  --set controller.metrics.serviceMonitor.namespaceSelector.any=true

The Grafana operator allows you to define dashboards, users and settings via ConfigMaps. Study, learn, be better than the others ;-)

Install Grafana and, again, change the values for hostname and tlsSecret!

helm install grafana-operator bitnami/grafana-operator --namespace monitoring \
  --set operator.prometheus.serviceMonitor.enabled=true \
  --set operator.prometheus.serviceMonitor.namespace=monitoring \
  --set grafana.ingress.enabled=true \
  --set grafana.ingress.ingressClassName=nginx \
  --set grafana.ingress.hostname=grafana.eu.nosweat.cloud \
  --set-string grafana.ingress.annotations."cert-manager\.io/cluster-issuer"="letsencrypt-prod" \
  --set grafana.ingress.tls=true \
  --set grafana.ingress.tlsSecret="grafana.eu.nosweat.cloud-tls"

Get the Grafana admin password:

echo "Password: $(kubectl get secret grafana-admin-credentials --namespace monitoring -o jsonpath="{.data.GF_SECURITY_ADMIN_PASSWORD}" | base64 -d)"

Now go and deploy your awesome applications; don't forget to add Prometheus exporters. Come back from time to time for updates.
