Kubernetes - Ubuntu Bionic

Kubernetes install

https://linuxconfig.org/how-to-install-kubernetes-on-ubuntu-18-04-bionic-beaver-linux
https://computingforgeeks.com/how-to-setup-3-node-kubernetes-cluster-on-ubuntu-18-04-with-weave-net-cni/

Prerequisites

  • comment out / disable the swap line in /etc/fstab
  • run the swapoff -a command (see the sketch after this list)
  • set a unique hostname for every Kubernetes node (in /etc/hostname and in /etc/hosts), then restart the server to apply it
  • give every server at least 4 CPU cores and 4 GB of RAM
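
A minimal sketch of the swap and hostname steps, run as root on each node; the node name k8s-node1 and the sed pattern are assumptions, adjust them to your fstab and naming scheme:

# disable swap now and keep it disabled after reboot
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab
# give this node a unique name (k8s-node1 is a placeholder)
hostnamectl set-hostname k8s-node1
echo "127.0.1.1 k8s-node1" >> /etc/hosts
reboot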

Install

run these commands on every node (as root):

curl -L -S get.docker.com | bash

usermod -a -G docker <username>

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"

apt install kubeadm 

run on master as root:

kubeadm init --pod-network-cidr=10.244.0.0/16

copy the join command from the output; you will run it on the slave nodes as root later (an example is shown below)

then set up kubectl for your regular user on the master:

su <username>
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Choose one of these network plugins (note that the --pod-network-cidr used above matches flannel's default):

su <username>
# flannel
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# weave
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
kubectl get pods --all-namespaces

run the join command on every slave node as root, for example:

kubeadm join <master host IP>:6443 --token qdjnpd.5glu39uxr92xarsj --discovery-token-ca-cert-hash sha256:ed0684156c718caf425ceae6c85a56c05f7b49037cde3a2f1fd57430a4f58f89

Check on master:

kubectl get nodes

If the new node appears in the output, the join worked.
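
The output should look something like this (the names, ages, and versions below are illustrative, not from a real cluster):

NAME   STATUS   ROLES    AGE   VERSION
ubu1   Ready    master   10m   v1.13.4
ubu2   Ready    <none>   2m    v1.13.4

A freshly joined node may report NotReady for a short while, until the network plugin pods start on it.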

Dashboard

https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/

kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml

start the proxy service for the Dashboard:

kubectl proxy &

The Dashboard is a web UI alternative to kubectl. Kubernetes does not ship with a permanently exposed dashboard the way Portainer does for Docker or Rancher does for the clusters it creates; by default you reach it through the computer that runs kubectl. Once the kubectl proxy command is running, the Dashboard is available at http://127.0.0.1:8001/ui.
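
If kubectl and the proxy run on the master but your browser is on another machine, an SSH port tunnel (mentioned below) gets you there; a minimal sketch, assuming the master is reachable over SSH as <master host IP>:

# run on your own computer; forwards local port 8001 to the proxy on the master
ssh -L 8001:127.0.0.1:8001 <username>@<master host IP>
# then browse http://127.0.0.1:8001/ui locally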

What is my service token?

kubectl -n kube-system get secret | grep kubernetes-dashboard-token | cut -f1 -d ' ' | xargs kubectl -n kube-system describe secret

Use it to authenticate to the Dashboard.

If you have access problems with the Dashboard, this page may help: https://github.com/kubernetes/dashboard/wiki/Access-control

If you would like to access the Dashboard from anywhere, without an SSH port tunnel, you have to change type: ClusterIP to type: NodePort in the Dashboard's Service configuration. How? Edit the configuration with this command:

kubectl -n kube-system edit service kubernetes-dashboard

Save the edited file and check the Dashboard public port with this command:

kubectl -n kube-system get service kubernetes-dashboard

example:

NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.107.176.138   <none>        443:32369/TCP   3d6h

Now you can access the Kubernetes Dashboard on port 32369 via any node IP (example: https://<node IP>:32369)
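
If you prefer a non-interactive change over editing the Service, the same switch can be done with kubectl patch; a minimal sketch:

kubectl -n kube-system patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'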

more info here: https://github.com/kubernetes/dashboard/wiki/Accessing-Dashboard---1.7.X-and-above

Deployment

https://kubernetes.io/docs/concepts/workloads/controllers/deployment/

Simple web service with NFS share

nginx.yaml:

apiVersion: v1
kind: Service
metadata:
  name: web-test
  labels:
    run: web-test
spec:
  type: NodePort
  ports:
    - nodePort: 32180   # external port - available on every node IP
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    run: web-test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-test
spec:
  selector:
    matchLabels:
      run: web-test
  replicas: 1
  template:
    metadata:
      labels:
        run: web-test
    spec:
      containers:
        - name: web-test
          image: nginx
          ports:
            - containerPort: 80
              name: web-test
          volumeMounts:
            - name: web-test
              mountPath: /usr/share/nginx/html
      volumes:
      # hostPath alternative, kept from an earlier experiment:
      # - name: jira-install
      #   hostPath:
      #     path: /mnt/nas-volume/jira/docker-jira-service-desk/jira_install
      - name: web-test
        nfs:
          server: 192.168.10.52
          path: /k8s-volumes/web1

start: kubectl apply -f nginx.yaml

kubectl get deployments -o wide
sonrisa@ubu1:~$ kubectl describe deployments/web-test | grep -i controller
  Normal  ScalingReplicaSet  3m5s  deployment-controller  Scaled up replica set web-test-7b4c895559 to 1

sonrisa@ubu1:~$ kubectl get pods | grep -i web-test-7b4c895559
web-test-7b4c895559-9hr68       1/1     Running   0          3m42s

sonrisa@ubu1:~$ kubectl describe pods web-test-7b4c895559-9hr68 | grep -i ip
IP:                 10.244.2.7

sonrisa@ubu1:~$ curl 10.244.2.7
hello world
Thu Feb 14 14:43:50 UTC 2019
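
The Service above pins nodePort 32180, so the same page should also be reachable from outside the cluster on any node IP; for example:

curl http://<any kubernetes node IP>:32180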

Local Storage

https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/

local-storage.yaml:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/srv/local-data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
       claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage

apply it, then check the volume, the claim, and the backing directory:

kubectl apply -f local-storage.yaml
kubectl get pv
kubectl get pvc
ls -hal /srv/local-data

echo "this is a local storage test" > /srv/local-data/index.html

curl http://$( kubectl describe pods/task-pv-pod | grep IP | awk '{ print $2 }' )
this is a local storage test
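
When the test is done, the resources can be removed in reverse order; a minimal cleanup sketch:

kubectl delete pod task-pv-pod
kubectl delete pvc task-pv-claim
kubectl delete pv task-pv-volume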

NFS Storage

NFS is a useful and simple way to keep data on central storage, outside of the containers. Prerequisite: install the "nfs-common" package (the NFS client) on every Kubernetes node. Example:

sudo apt-get install -y nfs-common

NFS server IP address: 192.168.99.145
NFS share: /srv/share/web-test
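
The share must also be exported on the NFS server side; a minimal /etc/exports sketch for the server above, assuming the Kubernetes nodes sit in 192.168.99.0/24 (adjust the network to your own):

# /etc/exports on 192.168.99.145
/srv/share/web-test 192.168.99.0/24(rw,sync,no_subtree_check)

Reload the exports with sudo exportfs -ra after editing.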

Pod

You can start a simple pod for a quick test.

web-pod.yml:

apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    web-test: test1
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: webdir
      mountPath: /usr/share/nginx/html
  volumes:
  - name: webdir
    nfs:
      server: 192.168.99.145
      path: /srv/share/web-test

kubectl apply -f web-pod.yml

Expose the web port: kubectl expose pod web --type=NodePort --name=web-test --selector=web-test=test1 --port=80

Check the service port: kubectl get services/web-test -o wide

example output:

NAME         TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE   SELECTOR
web-test     NodePort   10.111.224.9   <none>        80:30649/TCP   16m   web-test=test1

Check the web page: curl <any kubernetes node IP>:30649, where 30649 is the NodePort that Kubernetes generated automatically for the web pod's service.
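
If you need that port in a script instead of reading it from the table, jsonpath can extract it; a minimal sketch:

kubectl get service web-test -o jsonpath='{.spec.ports[0].nodePort}'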

Deployment

A little more complex, and a better method, is a Deployment.

You can generate a sample deployment yaml with this command:

kubectl run web-cluster --image=nginx --expose --port=80 --service-overrides='{ "spec": { "type": "NodePort" } }' --dry-run -o yaml > deployment.yaml

Now edit deployment.yaml and add the extra parameters you need, such as the replica count and the NFS volume.

Example:

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  name: web-cluster
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: web-cluster
  type: NodePort
status:
  loadBalancer: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: web-cluster
  name: web-cluster
spec:
  replicas: 1
  selector:
    matchLabels:
      run: web-cluster
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: web-cluster
    spec:
      containers:
      - image: nginx
        name: web-cluster
        ports:
        - containerPort: 80
        resources: {}
        volumeMounts:
        - name: webdir
          mountPath: /usr/share/nginx/html
      volumes:
      - name: webdir
        nfs:
          server: 192.168.99.145
          path: /srv/share/web-test/
status: {}

Create the deployment: kubectl apply -f deployment.yaml

Check the state: kubectl get deploy,pods,service

Check the web page with a curl: curl <any kubernetes node IP>:30659, where 30659 is the external port number of the web-cluster service created by Kubernetes (yours will differ; read it from the services output).

Now you can change the replica count of web-cluster with this:

kubectl scale deployments/web-cluster --replicas=3

Check the status:

kubectl get deploy,pods,services

Remote management

Would you like to administer the Kubernetes cluster from another computer?
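
A minimal sketch, assuming kubectl is already installed on the other computer and you can copy files from the master as root; admin.conf grants full cluster-admin rights, so protect the copied file:

scp root@<master host IP>:/etc/kubernetes/admin.conf ~/.kube/config
chmod 600 ~/.kube/config
kubectl get nodes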
