How to set up Kubernetes with Vagrant
@bossjones, last active June 18, 2020

@bossjones (Author)

Kubeadm Ansible Playbook (kairen/kubeadm-ansible)

Build a Kubernetes cluster using Ansible with kubeadm. The goal is to easily install a Kubernetes cluster on machines running:

  • Ubuntu 16.04
  • CentOS 7
  • Debian 9

System requirements:

  • Deployment environment must have Ansible 2.4.0+
  • Master and nodes must have passwordless SSH access
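
For example, passwordless access from the deployment host can be prepared like this (a minimal sketch; the k8s user and the IPs reuse the examples below and should be adjusted to your environment):

# generate a key pair on the deployment host (skip if one already exists)
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa

# copy the public key to the master and every node
for host in 192.16.35.12 192.16.35.10 192.16.35.11; do
  ssh-copy-id k8s@"$host"
done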

Usage

Add the target hosts' information to a file called hosts.ini. For example:

[master]
192.16.35.12

[node]
192.16.35.[10:11]

[kube-cluster:children]
master
node

Before continuing, edit group_vars/all.yml to your specified configuration.

For example, I choose to run flannel instead of calico, and thus:

# Network implementation('flannel', 'calico')
network: flannel

Note: Depending on your setup, you may need to modify cni_opts to an available network interface. By default, kubeadm-ansible uses eth1. Your default interface may be eth0.
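
A hedged sketch of the relevant group_vars/all.yml entries when flannel should bind to eth0 instead of eth1 (verify the exact key names against your copy of the file; the flag shown assumes flannel's --iface option):

# group_vars/all.yml (excerpt)
network: flannel

# interface handed to the CNI plugin; change to whatever `ip addr` shows on your hosts
cni_opts: "--iface=eth0"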

After going through the setup, run the site.yaml playbook:

$ ansible-playbook site.yaml
...
==> master1: TASK [addon : Create Kubernetes dashboard deployment] **************************
==> master1: changed: [192.16.35.12 -> 192.16.35.12]
==> master1:
==> master1: PLAY RECAP *********************************************************************
==> master1: 192.16.35.10               : ok=18   changed=14   unreachable=0    failed=0
==> master1: 192.16.35.11               : ok=18   changed=14   unreachable=0    failed=0
==> master1: 192.16.35.12               : ok=34   changed=29   unreachable=0    failed=0

Download the admin.conf from the master node:

$ scp k8s@k8s-master:/etc/kubernetes/admin.conf .

Verify cluster is fully running using kubectl:

$ export KUBECONFIG=~/admin.conf
$ kubectl get node
NAME      STATUS    AGE       VERSION
master1   Ready     22m       v1.6.3
node1     Ready     20m       v1.6.3
node2     Ready     20m       v1.6.3

$ kubectl get po -n kube-system
NAME                                    READY     STATUS    RESTARTS   AGE
etcd-master1                            1/1       Running   0          23m
...

Resetting the environment

Finally, reset all kubeadm installed state using reset-site.yaml playbook:

$ ansible-playbook reset-site.yaml

@bossjones (Author)

ReSearchITEng/kubeadm-playbook

Update: status of the project: Stable

The kubeadm-playbook ansible project's code is on GitHub.

kubeadm based all in one kubernetes cluster installation (and addons) using Ansible

Tested on CentOS/RHEL 7.2+ (ideally 7.4/7.5) and Ubuntu 16.04 (both with overlay2 and automatic docker_setup).
Optionally, when docker_setup: True, this project will also set up Docker on the host if it does not exist.

Targets/pros&cons

Kubeadm drastically simplifies the installation, so for BYO environments (VMs, desktops, bare metal), complex projects like kubespray/kops are no longer required.
This project aims to get a fully working environment in a matter of minutes on any hardware: bare metal, VMs (vSphere, VirtualBox), etc.
The major difference from other projects: it uses kubeadm for all activities, and Kubernetes runs in containers.
The project is for those who want to create and recreate a k8s cluster using the official method (kubeadm), with all production features:

  • Ingresses (via helm chart)
  • Persistent storage (ceph or vsphere)
  • dashboard (via helm chart)
  • heapster (via helm chart)
  • support proxy
  • modular, clean code, supporting multiple activities by using ansible tags (e.g. add/reset a subgroup of nodes).
  • support for multi-master

PROS:

  • quick (3-7 min) full cluster installation
  • an all-in-one shop for a cluster you can start working with right away, without mastering the details
  • applies fixes for quite a few issues that current k8s installers have
  • deploys plugins to allow creation of dynamic persistent volumes via vSphere, rook, or self-deployed NFS
  • kubeadm is the only official tool specialized to install k8s

CONS:

  • deployment requires internet access. Changes can be made to support situations where there is no internet; should anyone be interested, I can give suggestions on how (also see the gluster project for hints).

Prerequisites:

  • ansible min. 2.3 (but higher is recommended; tested on current 2.5)
  • For a perfect experience, one should at least define a wildcard DNS subdomain, to easily access the ingresses. The wildcard can be pointed to the master (as it's guaranteed to exist).
    Note: the dashboard will by default use the master machine, but will also be deployed under the provided domain (in parallel, via an additional ingress rule)
  • if docker_setup is True, it will also attempt to install Docker and set it up with the overlay2 storage driver (requires CentOS 7.4+)
  • it will set the required kernel modules (if desired)
  • if one needs ceph (rook) persistent storage, disks or folders should be prepared and properly sized (e.g. /storage/rook)

This playbook will:

  • pre-sanity: docker sanity
  • kernel modules (load & setup for every restart)
  • Install ntp (to keep time in sync within cluster) (control via group_vars/all)
  • Install the kubeadm repo
  • Install kubeadm, kubelet, kubernetes-cni, and kubectl
  • If desired, manipulate SELinux setting (control via group_vars/all)
  • Set kubelet --cgroup-driver=systemd, swap-off, and many other settings required by kubelet to work (control via group_vars/all)
  • Reset activities (like kubeadm reset, unmount of /var/lib/kubelet/* mounts, ip link delete cbr0, cni0 , etc.) - important for reinstallations.
  • Initialize the cluster on master with kubeadm init
  • Install user specified pod network from group_vars/all (flannel, calico, weave, etc)
  • Join the nodes to the cluster with 'kubeadm join' and full set of params.
  • Install helm
  • Install nginx ingress controller via helm (control via group_vars/all)
  • Install kubernetes dashboard (via helm)
  • Installs any helm charts listed in the config (via helm)
  • Installs any yaml listed in the config
  • Planned: Install prometheus via Helm (control via group_vars/all) -> the prometheus operator helm chart is expected soon
  • Sanity: checks if nodes are ready and if all pods are running, and provides details of the cluster.
  • when enabled, it will create a ceph storage cluster using the rook operator
  • when enabled, it will create the vsphere persistent storage class and all required setup. Please fill in the vCenter user/password/URL, etc. in group_vars/all, and follow all initial steps there.
  • it will define a set of handy aliases

NOTE: It supports http_proxy configurations. Simply update your proxy settings in group_vars/all.
This has been tested with RHEL/CentOS 7.3-7.5, Ubuntu 16.04, and Kubernetes v1.6.1 - v1.11.3.
In general, keep the kube* tools at the same minor version as the desired k8s cluster (e.g. to install k8s v1.7 one must also use kubeadm 1.7; a kubeadm limitation).
FYI, newer kube* tools usually support a cluster one minor version older (e.g. kube[adm/ctl/let] 1.8.* accepts a Kubernetes 1.7.* cluster).
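
For example, a sketch of pinning the kube* packages to one minor version (package names follow the official Kubernetes repos; the exact versions below are only illustrative):

# CentOS/RHEL
yum install -y kubeadm-1.11.3 kubelet-1.11.3 kubectl-1.11.3

# Ubuntu
apt-get install -y kubeadm=1.11.3-00 kubelet=1.11.3-00 kubectl=1.11.3-00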

If for any reason anyone needs to relax RBAC, they can do:
kubectl create -f https://github.com/ReSearchITEng/kubeadm-playbook/blob/master/allow-all-all-rbac.yml

How To Use:

Full cluster installation

git clone https://github.com/ReSearchITEng/kubeadm-playbook.git
cd kubeadm-playbook/
cp hosts.example hosts
vi hosts <add hosts>
# Set up vars in group_vars
vi group_vars/all/* <modify vars as needed>
ansible-playbook -i hosts site.yml [--skip-tags "docker,prepull_images,kubelet"]

If there are any issues, you may want to run only some of the steps, by choosing the appropriate tags to run.
Read the site.yml. Here are also some explanations of important steps:

  • reset any previous cluster, delete etcd, cleanup network, etc. (role/tag: reset)
  • common section which prepares all machines (e.g. docker if required, kernel modules, etc) (role: common)
  • install etcd (role/tag: etcd) (required only when you have HA)
  • install master (role/tag: master)
  • install nodes (role/tag: node)
  • install network, helm, ingresses, (role/tag: post_deploy)

To manage (add/reinstall) only one node (or a set of nodes):

  • modify inventory (hosts file), and leave the master intact, but for nodes, keep ONLY the nodes to be managed (added/reset)
  • ansible-playbook -i hosts site.yml --tags node

To remove a specific node (drain it, then kubeadm reset, etc.):

  • modify inventory (hosts file), and leave the master intact, but for nodes, keep ONLY the nodes to be removed
  • ansible-playbook -i hosts site.yml --tags node_reset

Other activities possible:

There are other operations possible against the cluster; look at site.yml and decide. A few more examples of useful tags:

  • "--tags reset" -> which resets the cluster in a safe manner (first removes all helm charts, then cleans all PVs/NFS, drains nodes, etc.)
  • "--tags helm_reset" -> which removes all helm charts, and resets the helm.
  • "--tags cluster_sanity" -> which does, of course, cluster_sanity and prints cluster details (no changes performed)

Check the installation of dashboard

The output should have already presented the required info (or run again: ansible-playbook -i hosts site.yml --tags cluster_sanity).
The dashboard is served on the master host and, if a domain was configured, also at something like http://dashboard.cloud.corp.example.com (depending on the selected domain entry, and provided the wildcard DNS *.k8s.cloud.corp.example.com was properly set up to point to the master machine's public IP).

e.g. curl -SLk 'http://k8s-master.example.com/#!/overview?namespace=_all' | grep browsehappy

For testing the Persistent volume, one may use/tune the files in the demo folder.

kubectl exec -it demo-pod -- bash -c "echo Hello TEST >> /usr/share/nginx/html/index.html "

and check the http://pv.cloud.corp.example.com page.
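
If you prefer not to reuse the demo folder, a minimal PersistentVolumeClaim sketch like the one below exercises dynamic provisioning (the storageClassName is hypothetical; use the class created by your chosen backend, see kubectl get storageclass):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  # hypothetical class name; rook/vsphere setups create their own
  storageClassName: rook-block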

Load balancing

For LB, one may want to check also:

DEMO:

Installation demo k8s 1.7.8 on CentOS 7.4: kubeadm ansible playbook install demo asciinema video

Vagrant

To use Vagrant with one or multiple machines on a bridged interface (public_network, ports accessible), all machines must have the bridged interface as their 1st interface (so k8s processes will bind to it automatically). For this, use the script vagrant_bridged_demo.sh.

Steps to start Vagrant deployment:

  1. edit the ./Vagrantfile and set the desired number of machines, sizing, etc.
  2. run:
./vagrant_bridged_demo.sh --full [ --bridged_adapter <desired host interface|auto>  ] # bridged_adapter defaults to ip route | grep default | head -1 

After preparations (edit group_vars/all, etc.), run the ansible installation normally.

Using Vagrant with NAT as the 1st interface (usually with only one machine) was not tested, and the Vagrantfile may require some changes.
There was no focus on this option as it's more complicated to use afterwards: one must forward the ports manually to access ingresses like the dashboard from a browser, and it usually does not support more than one machine.

kubeadm-ha

While kubeadm does not make multi-master (aka HA) setup easy (yet), thanks to the community we have it!
Starting with our playbook for v1.11, we support master HA!
Kubeadm will support HA out of the box later (as per kubernetes/kubeadm#546); for now we do it using some workarounds.
Our HA work is based on projects like: https://github.com/mbert/kubeadm2ha (and https://github.com/sv01a/ansible-kubeadm-ha-cluster and/or github.com/cookeem/kubeadm-ha).

How does it compare to other projects:

Kubeadm -> the official k8s installer (yet to be GA).

With kubeadm-playbook we focus only on kubeadm.
Pros:

  • as it's the official k8s installation tool
  • kubeadm is released with every k8s release, and you have a guarantee to be in sync with the official code.
  • self-hosted deployment, making upgrades very smooth; here is a KubeCon talk presenting even more reasons to go with self-hosted k8s: https://www.youtube.com/watch?v=jIZ8NaR7msI

Cons:

  • currently in beta (GA expected soon)
  • no HA yet (expected in next release v1.10)

Other k8s installers

Similar k8s install projects for physical/Vagrant/VM (BYO, on-premises) environments that you may want to check; all of the below, however, are without kubeadm (as opposed to this project).

PRs are accepted and welcome.

PS: work inspired by @sjenning, and the master HA part by @mbert. PRs & suggestions from @carlosedp - thanks.
URL page of the kubeadm-playbook ansible project
The kubeadm-playbook ansible project's code is on GitHub

License: Public Domain

@bossjones (Author)

Issue w/ flannel and Kubernetes 1.12.3 - flannel-io/flannel#1044

@bossjones (Author)

YO THIS IS THE ONE !!!!!!!!!! OpenShift / K8s : DNS Configuration Explained

http://www.ksingh.co.in/blog/2017/10/04/openshift-dns-configuration-explained/

@bossjones (Author) commented Dec 4, 2018

https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/

The game.properties and ui.properties files in the configure-pod-container/configmap/kubectl/ directory are represented in the data section of the ConfigMap.
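
Per the linked task page, such a ConfigMap is created from the directory (or from individual files) with kubectl create configmap; the kubectl get output below then shows the result:

# one ConfigMap whose keys are the file names and whose values are the file contents
kubectl create configmap game-config --from-file=configure-pod-container/configmap/kubectl/

# or pick individual files
kubectl create configmap game-config-2 \
  --from-file=configure-pod-container/configmap/kubectl/game.properties \
  --from-file=configure-pod-container/configmap/kubectl/ui.properties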

kubectl get configmaps game-config -o yaml
apiVersion: v1
data:
  game.properties: |
    enemies=aliens
    lives=3
    enemies.cheat=true
    enemies.cheat.level=noGoodRotten
    secret.code.passphrase=UUDDLRLRBABAS
    secret.code.allowed=true
    secret.code.lives=30
  ui.properties: |
    color.good=purple
    color.bad=yellow
    allow.textmode=true
    how.nice.to.look=fairlyNice
kind: ConfigMap
metadata:
  creationTimestamp: 2016-02-18T18:52:05Z
  name: game-config
  namespace: default
  resourceVersion: "516"
  selfLink: /api/v1/namespaces/default/configmaps/game-config
  uid: b4952dc3-d670-11e5-8cd0-68f728db1985

kubeadm configmap

https://github.com/kubernetes/kubeadm/blob/master/docs/design/design_v1.10.md

Saves kubeadm MasterConfiguration in a ConfigMap for later reference
kubeadm saves the configuration passed to kubeadm init, either via flags or the config file, in a ConfigMap named kubeadm-config under the kube-system namespace.

This will ensure that kubeadm actions executed in the future (e.g. kubeadm upgrade) will be able to determine the actual/current cluster state and make new decisions based on that data.

Please note that:

  • Before uploading, sensitive information (e.g. the token) is stripped from the configuration.
  • Upload of the master configuration can be invoked individually with the kubeadm alpha phase upload-config command.
  • If you initialized your cluster using kubeadm v1.7.x or lower, you must manually create the master configuration ConfigMap before running kubeadm upgrade to v1.8. To facilitate this task, the kubeadm config upload (from-flags|from-file) command was implemented.
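
To see what kubeadm stored, the ConfigMap can be inspected directly:

# dump the saved configuration from the kube-system namespace
kubectl -n kube-system get configmap kubeadm-config -o yaml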

@bossjones (Author)

The KubeletConfiguration type should be used to change the configurations that will be passed to all kubelet instances deployed in the cluster. If this object is not provided or provided only partially, kubeadm applies defaults.

See https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/ or https://godoc.org/k8s.io/kubelet/config/v1beta1#KubeletConfiguration for the official kubelet documentation.

Here is a fully populated example of a single YAML file containing multiple configuration types to be used during a kubeadm init run.

https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1alpha3

apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
bootstrapTokens:
- token: "9a08jv.c0izixklcxtmnze7"
  description: "kubeadm bootstrap token"
  ttl: "24h"
- token: "783bde.3f89s0fje9f38fhf"
  description: "another bootstrap token"
  usages:
  - signing
  groups:
  - system:anonymous
nodeRegistration:
  name: "ec2-10-100-0-1"
  criSocket: "/var/run/dockershim.sock"
  taints:
  - key: "kubeadmNode"
    value: "master"
    effect: "NoSchedule"
  kubeletExtraArgs:
    cgroupDriver: "cgroupfs"
apiEndpoint:
  advertiseAddress: "10.100.0.1"
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
etcd:
  # one of local or external
  local:
    image: "k8s.gcr.io/etcd-amd64:3.2.18"
    dataDir: "/var/lib/etcd"
    extraArgs:
      listen-client-urls: "http://10.100.0.1:2379"
    serverCertSANs:
    -  "ec2-10-100-0-1.compute-1.amazonaws.com"
    peerCertSANs:
    - "10.100.0.1"
  external:
    endpoints:
    - "10.100.0.1:2379"
    - "10.100.0.2:2379"
    caFile: "/etcd/kubernetes/pki/etcd/etcd-ca.crt"
    certFile: "/etcd/kubernetes/pki/etcd/etcd.crt"
    certKey: "/etcd/kubernetes/pki/etcd/etcd.key"
networking:
  serviceSubnet: "10.96.0.0/12"
  podSubnet: "10.100.0.1/24"
  dnsDomain: "cluster.local"
kubernetesVersion: "v1.12.0"
controlPlaneEndpoint: "10.100.0.1:6443"
apiServerExtraArgs:
  authorization-mode: "Node,RBAC"
controllerManagerExtraArgs:
  node-cidr-mask-size: 20
schedulerExtraArgs:
  address: "10.100.0.1"
apiServerExtraVolumes:
- name: "some-volume"
  hostPath: "/etc/some-path"
  mountPath: "/etc/some-pod-path"
  writable: true
  pathType: File
controllerManagerExtraVolumes:
- name: "some-volume"
  hostPath: "/etc/some-path"
  mountPath: "/etc/some-pod-path"
  writable: true
  pathType: File
schedulerExtraVolumes:
- name: "some-volume"
  hostPath: "/etc/some-path"
  mountPath: "/etc/some-pod-path"
  writable: true
  pathType: File
apiServerCertSANs:
- "10.100.1.1"
- "ec2-10-100-0-1.compute-1.amazonaws.com"
certificatesDir: "/etc/kubernetes/pki"
imageRepository: "k8s.gcr.io"
unifiedControlPlaneImage: "k8s.gcr.io/controlplane:v1.12.0"
auditPolicy:
  # https://kubernetes.io/docs/tasks/debug-application-cluster/audit/#audit-policy
  path: "/var/log/audit/audit.json"
  logDir: "/var/log/audit"
  logMaxAge: 7 # in days
featureGates:
  selfhosting: false
clusterName: "example-cluster"
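
Note that the example above stops at ClusterConfiguration; a KubeletConfiguration can be appended to the same file as an extra YAML document. A minimal sketch against the v1beta1 kubelet API (field values here are illustrative, not taken from the docs above):

---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# settings here apply to every kubelet kubeadm deploys in the cluster
clusterDNS:
- "10.96.0.10"
clusterDomain: "cluster.local"
failSwapOn: true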

@bossjones (Author)

iptables and kubernetes

https://docs.oracle.com/cd/E52668_01/E88884/html/kube_admin_config_iptables.html

https://kubernetes.io/docs/setup/independent/install-kubeadm/#check-required-ports

Never faced an issue, but it might be a good idea to delete also: br0 lbr0 tun0 vxlan0
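
A hedged cleanup sketch for reinstalls (interface names vary by SDN and most will simply not exist on a given node):

# remove leftover SDN interfaces if present; ignore the ones that don't exist
for ifc in cbr0 cni0 flannel.1 br0 lbr0 tun0 vxlan0; do
  ip link delete "$ifc" 2>/dev/null || true
done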

docs.openshift.com/enterprise/3.1/architecture/additional_concepts/sdn.html

Review whether any of these can be used (pure vagrant, no bridge to an external adapter):

  • kubernetes/contrib:ansible/vagrant@master
  • thiagodasilva/kubernetes-swift:Vagrantfile@master

@bossjones (Author)

[nfs-provisioner] Quota not working

@bossjones (Author)

Support Add-ons

Required:

  • CoreDNS
  • Dashboard
  • Traefik

Optional:

  • Heapster + InfluxDB + Grafana
  • ElasticSearch + Fluentd + Kibana
  • Istio service mesh
  • Helm
  • Vistio
  • Kiali

SOURCE: https://github.com/rootsongjc/kubernetes-vagrant-centos-cluster/blob/e730b8a610cbf670ac7f5c6040be10d8c1dd0aad/README.md

https://istio.io/docs/concepts/what-is-istio/

https://github.com/Netflix/vizceral

https://github.com/Rev3rseSecurity/WebMap

https://github.com/fluent/fluentd-ui

@bossjones (Author) commented Dec 7, 2018

SOURCE: https://kubedex.com/ingress/

nginx-ingress vs kong vs traefik vs haproxy vs voyager vs contour vs ambassador vs istio ingress

As far as I know this is the complete list of Ingresses available for Kubernetes. Technically ambassador isn’t an ingress but it acts like one which is good enough. As you can probably see I’ve made quite a large table comparing features.

For those who struggle with reading the image there’s a link to open the google sheet directly below. Feel free to leave comments and I’ll update this blog post with corrections.

View the full Google sheet here.

Update: Istio ingress and Citrix ingress controller are now included in the Google sheet.

Based on the features, my own experience and anecdotal blog evidence I’ll attempt to provide my usual unbiased opinion on each.

  1. ingress-nginx
    This is probably the most commonly installed ingress. Safe, boring and reliable. Supports http, https and does ssl termination. You can also get TCP and UDP working but from looking at the Github issues I think I’d try to avoid it. You get quite a few nice load balancing options as well as powerful routing, websocket support, basic authentication and tracing.

It’s quite common to use this ingress in conjunction with cert-manager for generating SSL certs and external-dns for updating cloud based DNS entries.

The lack of dynamic discovery is a bit of a downer. There is a config generator that you can use to automate this but apparently it’s terrible.

Note: There’s the official Kubernetes ingress which is what we’re talking about here. There’s also the Ingress from Nginx corp which has different settings.

  2. Kong
    Most people will use Kong when they want an API gateway. Kong includes a plugin system that extends the features to beyond what a normal Ingress would do. I wouldn’t use this as a generic http load balancer but if you want API management features then Kong is definitely a good choice.

At previous companies I’ve always put an ingress in front of Kong and routed /api/ requests to it. However, more recently the developers of Kong have been making a lot of progress turning Kong into an Ingress.

  3. Traefik
    This one surprised me with just how many features it has. The resiliency features look awesome and from reading a broad selection of tech blogs it seems quite stable. Supporting dynamic configurations is a big upgrade if you’re currently using ingress-nginx.

One downside is it only supports http, https and grpc. If you need TCP load balancing then you’ll need to choose something else.

Another consideration is minimizing server reloads because that impacts load balancing quality and existing connections etc. Traefik doesn’t support hitless reloads so you need NGINX or Envoy Proxy for this. For a lot of people this is a big deal.

  4. HAProxy
    This is the king of the ingresses when it comes to load balancing algorithms. It’s also the best choice for load balancing TCP connections. HAProxy has a track record of being extremely stable software. You can also get a paid support subscription if you want one.

  5. Voyager
    Another ingress based on HAProxy under the covers. Voyager is packaged up nicely and the docs look good. I couldn’t see where the load balancing algorithms were configured so assumed it’s just defaulting to round robin. If that’s wrong let me know in the comments and I’ll update.

  6. Contour
    Based on Envoy this has some more modern features like supporting Canary deploys. It also has a good set of load balancing algorithms and support for a variety of protocols. Unlike some of the others listed I got the impression from Github that this is under pretty rapid development still. There are discussions about adding more features which seems promising.

  7. Ambassador
    As mentioned above this one isn’t technically an ingress if you go by the strict Kubernetes definition. With Ambassador you simply annotate your services and it acts like an ingress by routing traffic. Ambassador has some very cool features that none of the other ingresses have like traffic shadowing which allows you to test services in a live production environment by mirroring request data.

Ambassador integrates nicely with both Opentracing and Istio.

  8. Istio Ingress
    If you’re already running Istio then this is probably a good default choice. It has some of the more modern features that Ambassador has. It also has fault injection which looks like it might be fun to play with. However, Istio is currently doing a lot of work in this area and is moving away from Ingress towards Gateways. So if you’re looking for something that’s not changing every 5 seconds you may want to still consider Ambassador.

Istio ingress also doesn't support things like redirecting from cleartext to TLS and authentication, which are common features you want at your edge.

Summary
There’s no clear winner in this one because you’re going to need to pick the ingress based on your requirements. No single ingress currently does it all.

The safest choice is ingress-nginx. This is the one that most people use and it’s extremely reliable. The problem with ingresses is that when there’s a problem literally everyone complains. Ingress-nginx will cover 99% of use cases, so start here and then test others in a dev environment for a while before switching. Before you begin I’d recommend you read this blog to get ahead of some of the problems you may encounter.

My vote for the coolest ingress definitely goes to Ambassador. If you’re just running standard http based micro services and fancy living on the bleeding edge then you should definitely get Istio, Ambassador and Jaeger setup as a proof of concept.

@bossjones (Author)

KubeDNS Tweaks for Performance - https://rsmitty.github.io/KubeDNS-Tweaks/

consider running dnsmasq on nodes instead of kube-dns #32749 - kubernetes/kubernetes#32749
