Get started with a Kubernetes cluster on GCE using kubeadm

Quick start guide for setting up a small k8s cluster in GCE

Notes

  • We'll be running Ubuntu 18.04 instances
  • On Ubuntu 18.04 and Kubeadm 1.11.1 you can experience problems with nodes not

Step-by-step

  1. Create one master node and one worker node. Warning: wall of script below (a couple of optional verification commands follow after it):
echo "Creating a new project..."
gcloud projects create kubeadm-poc --name "Kubeadm PoC" --set-as-default

echo "Creating a new network..."
gcloud compute networks create k8s-cluster-network --subnet-mode custom

echo "Creating cluster network subnet used by the instances..."
gcloud compute networks subnets create k8s-cluster-network \
    --network k8s-cluster-network \
    --range 10.240.0.0/24

echo "Creating firewall rule that allows internal communication within the cluster..."
gcloud compute firewall-rules create k8s-cluster-allow-internal \
  --allow tcp,udp,icmp \
  --network k8s-cluster-network \
  --source-ranges 10.240.0.0/24,10.200.0.0/16

echo "Creating firewall rule that allows external communication to the cluster..."
gcloud compute firewall-rules create k8s-cluster-allow-external \
    --allow tcp:22,tcp:6443,icmp \
    --network k8s-cluster-network \
    --source-ranges 0.0.0.0/0

echo "Creating a static external IP address..."
gcloud compute addresses create k8s-cluster-external \
    --region $(gcloud config get-value compute/region)

echo "Creating worker node..."
gcloud compute instances create worker-0 \
    --async \
    --boot-disk-size 200GB \
    --can-ip-forward \
    --image-family ubuntu-1804-lts \
    --image-project ubuntu-os-cloud \
    --machine-type n1-standard-1 \
    --metadata pod-cidr=10.200.1.0/24 \
    --private-network-ip 10.240.0.10 \
    --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
    --subnet k8s-cluster-network \
    --tags k8s,worker

echo "Creating master node..."
gcloud compute instances create master-0 \
    --async \
    --boot-disk-size 200GB \
    --can-ip-forward \
    --image-family ubuntu-1804-lts \
    --image-project ubuntu-os-cloud \
    --machine-type n1-standard-1 \
    --private-network-ip 10.240.0.11 \
    --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
    --subnet k8s-cluster-network \
    --tags k8s,master

echo "Create routes for instances..."
gcloud compute routes create k8s-cluster-route-10-200-1-0-24 \
    --network k8s-cluster-network \
    --next-hop-address 10.240.0.10 \
    --destination-range 10.200.1.0/24
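Optionally, since the instances are created with --async, verify that they and the route came up before moving on (these commands are not part of the original script):

gcloud compute instances list
gcloud compute routes list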
  2. SSH to the master node
gcloud compute ssh master-0
  3. First things first, update and upgrade the system packages
apt update && apt -y upgrade
  4. Turn off swap (a kubelet prerequisite); make it permanent if needed via /etc/fstab (see the sketch below)
swapoff -a
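A minimal sketch for making the swap change permanent, assuming your image's /etc/fstab actually contains a swap entry (GCE images often don't):

# Comment out any swap entry in /etc/fstab so swap isn't re-enabled on reboot
sed -i '/ swap / s/^/#/' /etc/fstab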
  5. Add changes to ufw's sysctl.conf and enable IP forwarding (see the note below on making IP forwarding persistent)
vi /etc/ufw/sysctl.conf
Add the following lines:
net/bridge/bridge-nf-call-ip6tables = 1
net/bridge/bridge-nf-call-iptables = 1
net/bridge/bridge-nf-call-arptables = 1

echo 1 > /proc/sys/net/ipv4/ip_forward
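The echo above only enables IP forwarding until the next reboot. If you want it to persist as well (not covered in the original steps), one option is via sysctl:

# Persist IP forwarding across reboots
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p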
  6. Reboot if needed (might be required for the changes above to take effect)
  7. Install Docker via apt. kubeadm is tested with Docker 17.03, but via apt (as of writing this gist) we'll get version 17.12; everything still works though. Also enable the Docker service so that it survives reboots (an optional verification follows the commands below).
apt install -y docker.io
systemctl enable docker.service
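Optional sanity check, not part of the original steps, to confirm Docker is running before continuing:

systemctl status docker --no-pager
docker version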
  8. Install the needed package
apt install -y apt-transport-https
  9. Add the Google Cloud apt package GPG key
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
  10. Add the Kubernetes apt repo (yes, it's still Xenial, but the packages work fine on Bionic, i.e. Ubuntu 18.04)
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
  11. Run an update to fetch package lists from the Kubernetes repository
apt update
  12. Install kubectl, kubeadm and kubelet. The kubelet will crash immediately; this will be sorted out soon when initializing the cluster with kubeadm (an optional version hold follows the command below).
apt install -y kubelet kubeadm kubectl
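Optionally (not in the original steps), hold the packages at the installed version so a later apt upgrade doesn't unintentionally bump the cluster to a newer Kubernetes version:

apt-mark hold kubelet kubeadm kubectl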
  13. Initialize the cluster with the Pod CIDR configured when bootstrapping networks in GCE
kubeadm init --pod-network-cidr=10.200.1.0/24
  14. Configure kubectl. After the initialization completes you'll get the info needed to join nodes to the cluster, and also how to connect to the cluster with kubectl (a join example follows after the commands below).
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
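kubeadm init prints the exact kubeadm join command for your cluster. Run it on worker-0 once Docker and the kubeadm packages are installed there as well; the token and hash below are placeholders only, copy them from your own kubeadm init output:

# On worker-0 -- use the command printed by kubeadm init on master-0
kubeadm join 10.240.0.11:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>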
  15. Run the following to verify connectivity to the cluster
kubectl get nodes
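To also check that the control plane pods started correctly (optional):

kubectl get pods -n kube-system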