Rancher Install Guide for n00bs

Greenfield Rancher install

Pre-Rancher Steps

Kubernetes Persistent Storage

Head to your Synology/FreeNAS and set up a new NFS share just for your Kubernetes persistent storage. Make sure to turn off authentication so the cluster nodes can mount it without credentials.
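
If your storage box is a plain Linux server rather than a NAS with a GUI, a minimal /etc/exports sketch looks like this (the path and subnet are placeholders from my setup, match them to yours):

# /etc/exports -- hypothetical export for the kubernetes share
/volume1/rancher 10.0.5.0/24(rw,sync,no_root_squash,no_subtree_check)

# apply the export without restarting the NFS server
sudo exportfs -ra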

Networking - Create a VLAN just for your VMs

  1. Create a VLAN (10.0.X.1 or 192.168.X.1) just for your VMs in your UniFi Controller; don't worry about firewall rules or traffic shaping. What I typically do is, once I have created the VLAN, go to the UniFi switch configuration area and map the port that the server is plugged into to my VM VLAN. You can sanity-check the result from the server itself (see the snippet after this list).
  2. Create a VLAN for your Ingress Controller/MetalLB. Just create the VLAN and we'll worry about the rest later.
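
Once the switch port is mapped, a quick sanity check from the server (the addresses here are hypothetical, substitute your VM subnet and gateway):

# Confirm the server picked up an address on the VM VLAN
ip -4 addr show
# Confirm you can reach the VLAN gateway (10.0.5.1 is a placeholder)
ping -c 3 10.0.5.1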

Networking - Set up Pi-hole

Since we're going to be doing lots of fun DNS things, having a robust DNS solution makes everything way easier. I recommend Pi-hole. While Pi-hole was built to be an ad-blocking DNS solution, you can make it handle local DNS entries just as easily.
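
If you don't have Pi-hole running yet, the official Docker image is a quick way to stand one up. A minimal sketch (the timezone, volume names, and restart policy are my own assumptions, not part of this guide):

# Minimal Pi-hole container; TZ and volume names are placeholders
docker run -d --name pihole \
  -p 53:53/tcp -p 53:53/udp \
  -p 80:80/tcp \
  -e TZ=America/New_York \
  -v pihole_etc:/etc/pihole \
  -v pihole_dnsmasq:/etc/dnsmasq.d \
  --restart=unless-stopped \
  pihole/pihole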

Configuring Pi-Hole

I use the approach laid out by this guy. Basically, whenever I add a new host, I append the IP and hostname to the file below and run the playbook; after that, any server on my network knows that the hostname resolves to that IP. (A sketch of such a playbook follows after the host list.)

templates/localnet.list.j2
# Brandon Local Network
# {{ ansible_managed }}

10.0.5.7        larkspur.lol    larkspur
10.0.1.9        influxdb.lol    influxdb

# Hosts
10.0.1.9        spinnaker.lol   spinnaker
10.0.5.4        k8s.lol         k8s

# Kubernetes - Rancher
10.0.5.2        r210.lol
10.0.1.122      frylock.lol     frylock
10.0.5.3        lenovo.lol
10.0.5.7        lifeboat.lol
10.0.1.9        spinnaker.lol
10.0.1.9        gogs.lol
10.0.1.218      desk.lol        desk
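
The playbook itself isn't included in this gist; a minimal sketch of what it could look like (the host group, destination paths, and handler are my assumptions):

# deploy-localnet.yml -- hypothetical minimal playbook
- hosts: pihole
  become: true
  tasks:
    - name: Push local hosts file to Pi-hole
      template:
        src: templates/localnet.list.j2
        dest: /etc/pihole/localnet.list
      notify: restart pihole dns
    - name: Push dnsmasq config
      template:
        src: templates/02-localnet.conf.j2
        dest: /etc/dnsmasq.d/02-localnet.conf
      notify: restart pihole dns
  handlers:
    - name: restart pihole dns
      command: pihole restartdns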

Pi-hole Wildcard DNS

You really don't want to create a new DNS entry every time you spin up a new service on your cluster, so wildcard DNS is where it's at. I have my own made-up domain (.lol); you need to do the same, so let's call yours .phil.

If you create a wildcard DNS entry for *.k8s.phil and point it at a host running your NGINX load balancer (by default, all of your Rancher hosts will be running one), NGINX will know which service you are looking for from the hostname and reroute the request automatically. Below is my Pi-hole config that sets up the wildcard DNS entry.

templates/02-localnet.conf.j2
# Local network dnsmasq config
# {{ ansible_managed }}
addn-hosts=/etc/pihole/localnet.list
localise-queries
no-resolv

# Local wildcard
address=/k8s.lol/10.0.5.44 # This routes *.k8s.lol -> my rancher master running my nginx loadbalancer 
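
To confirm the wildcard resolves, ask your Pi-hole for any made-up name under the domain (PIHOLE_IP is a placeholder for your Pi-hole's address):

# Any name under k8s.lol should come back as the ingress host
dig +short anything.k8s.lol @PIHOLE_IP   # expect 10.0.5.44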

Post-installation Rancher Steps

Installing kubectl on your host

Hopefully you have a Linux box on your network that you plan to do all your Kubernetes command-line goodness from.

  1. Download kubectl:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
mkdir ~/.kube
touch ~/.kube/config
echo "paste in your kubeconfig at ~/.kube/config"

  2. Copy over your kubeconfig. In order to talk to your k8s cluster, you need to copy over your kubeconfig file. Head to your Rancher UI and get to the screen with all the nice gauges. In the top right corner, you will see 'Kubeconfig File'. Click that, copy the text, and save it to ~/.kube/config.

  3. Test it by running kubectl get nodes -o wide. It should spit out something like:

bg@[~] > k get nodes -o wide
NAME    STATUS   ROLES                      AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME
i7a     Ready    worker                     11d    v1.15.5   10.0.5.74     <none>        CentOS Linux 7 (Core)   3.10.0-862.el7.x86_64       docker://19.3.4
r02     Ready    worker                     265d   v1.15.5   10.0.5.59     <none>        CentOS Linux 7 (Core)   3.10.0-862.el7.x86_64       docker://18.9.2
r04     Ready    worker                     274d   v1.15.5   10.0.5.57     <none>        CentOS Linux 7 (Core)   3.10.0-957.5.1.el7.x86_64   docker://18.9.2
texas   Ready    controlplane,etcd,worker   274d   v1.15.5   10.0.5.44     <none>        CentOS Linux 7 (Core)   3.10.0-957.5.1.el7.x86_64   docker://18.9.2
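
(The prompt above runs k instead of kubectl; that's just a personal alias. If you want the same shortcut:)

# Optional: make k an alias for kubectl
echo "alias k='kubectl'" >> ~/.bashrc
source ~/.bashrc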

Install Helm

Think of Helm as your App Store for Kubernetes. While Rancher automatically enables Helm for you on the server side, you need to install the command-line tool in order to deploy things outside of the Rancher UI. Note that the commands below are for Helm 2, which uses the in-cluster Tiller component.

# Install the helm CLI (grabs the latest release) and initialize Tiller in the cluster
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get | bash
sleep 2
helm init

# Give Tiller a service account with cluster-admin so it can deploy anywhere
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
helm init --service-account tiller --upgrade
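
Once Tiller settles, a quick sanity check (name=tiller is the label Helm 2 applies to the Tiller pod):

# Client and server versions should both print once Tiller is up
helm version
kubectl -n kube-system get pods -l name=tiller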

NFS Storage

Your pods are going to run all over your cluster; who knows which node's hard drive they will land on. In order to have stateful apps that persist data across your cluster, you need a persistent storage volume. You already (hopefully) set up an NFS share above. Now we have to tell k8s to use it.

Installing the NFS Client Storage Provisioner

# Point these at the NFS share you created earlier
export NFS_IP=10.0.1.9
export NFS_SHARE_NAME="/volume1/rancher"
helm install --name nfs-client-provisioner --set nfs.server=${NFS_IP} --set nfs.path=${NFS_SHARE_NAME} stable/nfs-client-provisioner
sleep 5
# The following line tells k8s that the default persistent storage should be your NFS share.
kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Boom. Now you can deploy apps and the data will persist across restarts, scales, etc.
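
If you want to prove it to yourself, here's a throwaway test claim (the claim name and size are arbitrary placeholders):

# Create a tiny test claim against the new default storage class
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF
kubectl get pvc nfs-test-claim   # STATUS should flip to Bound
kubectl delete pvc nfs-test-claim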
