@joestringer
Last active April 20, 2021 17:06
MicroK8s development environment setup for Cilium

Set up microk8s with Cilium for development

Microk8s is a Canonical project that provides a Kubernetes environment for local development, similar to minikube but without requiring a separate VM to manage. These instructions describe setting it up for common Cilium development use cases, and may be particularly helpful for testing BPF kernel extensions with Cilium.

Microk8s runs its own docker daemon for the Kubernetes runtime, so if you have an existing docker installation this can be confusing: when building images, an image may be stored by one installation and not the other. This guide assumes you run both docker daemon instances, using your existing docker-ce to build Cilium while the microk8s.docker daemon provides the runtime for your Kubernetes pods.
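To make the two-daemon setup concrete, the sketch below shows how to address each daemon explicitly by its socket. The socket paths are the defaults mentioned later in this guide; the commented commands assume the respective clients are installed.

```shell
# Two docker daemons run side by side; pick one explicitly via its socket.
# These are the default paths described in this guide.
CE_SOCK="unix:///var/run/docker.sock"
MK8S_SOCK="unix:///var/snap/microk8s/current/docker.sock"

echo "docker-ce daemon:  ${CE_SOCK}"
echo "microk8s daemon:   ${MK8S_SOCK}"

# Examples (require the respective clients to be installed):
# docker -H "${MK8S_SOCK}" ps   # drive the microk8s daemon with the docker-ce client
# microk8s.docker ps            # or use the client bundled with microk8s
```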

Requirements

This howto was run on a packet.net c1.small.x86 node with Ubuntu 17.10.

Quick howto:

# apt-get install snapd apt-transport-https ca-certificates curl software-properties-common build-essential flex bison clang llvm libelf-dev libssl-dev libcap-dev gcc-multilib libncurses5-dev pkg-config
# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
# add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# apt-get update
# apt-get install docker-ce

Install and set up golang and Cilium

# wget https://dl.google.com/go/go1.11.2.linux-amd64.tar.gz
# tar xvf go1.11.2.linux-amd64.tar.gz -C /usr/local/
# mkdir -p ~/go/src/github.com/cilium/

And add to bashrc:

export GOPATH=/<home>/go/
export GOROOT=/usr/local/go/
export PATH=/snap/bin/:/usr/local/go/bin/:/root/go/bin/:$PATH

Then follow with building Cilium itself:

# cd ~/go/src/github.com/cilium/
# git clone https://github.com/cilium/cilium.git && cd cilium/
# go get -u github.com/gordonklaus/ineffassign
# go get -u github.com/jteeuwen/go-bindata/...
# SKIP_DOCS=true make

Install and set up μK8s and Cilium

Note that microk8s has, since version 1.14, switched to containerd as its runtime. The instructions below will only work with 1.13 or below. If you're looking to use Cilium with MicroK8s 1.14, feel free to reach out in the #kubernetes channel of the Cilium Slack for assistance.

For stable version:

# snap install microk8s --channel=1.13/stable --classic                                          

Configure microk8s to use Cilium as the CNI, and restart it so the settings take effect:

# echo "--allow-privileged" >> /var/snap/microk8s/current/args/kube-apiserver
# sed -i 's/--network-plugin=kubenet/--network-plugin=cni/g'  /var/snap/microk8s/current/args/kubelet
# sed -i 's/--cni-bin-dir=${SNAP}\/opt/--cni-bin-dir=\/opt/g'  /var/snap/microk8s/current/args/kubelet
# microk8s.disable
# microk8s.enable
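The sed edits above can be dry-run on a scratch copy before touching the real args file. The sample file content below is illustrative; the real file is /var/snap/microk8s/current/args/kubelet.

```shell
# Create a scratch copy with the two flags the guide rewrites
# (contents illustrative; real file: /var/snap/microk8s/current/args/kubelet).
cat > /tmp/kubelet.args <<'EOF'
--network-plugin=kubenet
--cni-bin-dir=${SNAP}/opt
EOF

# Apply the same substitutions as above, then inspect the result.
sed -i 's/--network-plugin=kubenet/--network-plugin=cni/g' /tmp/kubelet.args
sed -i 's/--cni-bin-dir=${SNAP}\/opt/--cni-bin-dir=\/opt/g' /tmp/kubelet.args
cat /tmp/kubelet.args
```

The output should show `--network-plugin=cni` and `--cni-bin-dir=/opt`; note that `$` and `{` are literal here in sed's basic regex syntax, so `${SNAP}` matches verbatim.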

Set up bashrc aliases for kubectl access to μK8s

If you don't yet have kubectl installed, you can alias kubectl to gain access to the cluster:

snap alias microk8s.kubectl kubectl

Otherwise, you can configure your existing kubectl to point to microk8s:

# export KUBECONFIG=/snap/microk8s/current/client.config

Set up Cilium

# kubectl create -n kube-system -f https://raw.githubusercontent.com/cilium/cilium/1.3.0/examples/kubernetes/addons/etcd/standalone-etcd.yaml
# kubectl create -f https://raw.githubusercontent.com/cilium/cilium/1.3.0/examples/kubernetes/1.12/cilium.yaml
# kubectl -n kube-system edit ds cilium                                         

If you have trouble with the above steps, check the Troubleshooting section.

  • Set the docker socket hostPath to point within the snap path (so that Cilium can associate container labels with endpoints)

Example configuration:

      volumes:                                                                  
      - hostPath:                                                               
          path: /var/run/cilium                                                 
          type: DirectoryOrCreate                                               
        name: cilium-run                                                        
      - hostPath:                                                               
          path: /sys/fs/bpf                                                     
          type: DirectoryOrCreate                                               
        name: bpf-maps                                                          
      - hostPath:                                                               
          path: /var/snap/microk8s/current/docker.sock                          
          type: Socket                                                          
        name: docker-socket                                                     
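For reference, the cilium container in the same DaemonSet references these volumes by name in its volumeMounts. The fragment below is a sketch; the mount paths are assumed to match the defaults in the Cilium 1.3 DaemonSet YAML, so verify them against your own manifest:

```yaml
        volumeMounts:
        - name: cilium-run
          mountPath: /var/run/cilium
        - name: bpf-maps
          mountPath: /sys/fs/bpf
        - name: docker-socket
          mountPath: /var/run/docker.sock
          readOnly: true
```

Only the hostPath side needs to change for microk8s; inside the container, Cilium still expects the socket at /var/run/docker.sock.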

Set up other μK8s services

# microk8s.enable dns registry
  • The registry is available on localhost:32000 (via NodePort).

Pushing new custom versions of Cilium

For these steps, a recent version of docker-ce is recommended so that multi-stage builds work correctly. As of Nov 2018, the version of docker provided with microk8s (17.03) is not new enough (Issue)

These steps work by:

  1. Using the docker-ce client to build the new Cilium image into the docker-ce daemon
  2. Using the docker-ce client to push that image into the k8s-deployed docker registry
  3. Using the microk8s.docker client to pull the new image from the local registry into the docker daemon provided by microk8s
  4. Relying on the registry URI in the image reference so that k8s resolves the image from the microk8s.docker daemon.
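The steps above can be sketched as a small parameterized script. The tag name is hypothetical; the build and push commands (commented out, since they require docker-ce and the registry enabled above) mirror the ones given in the next section.

```shell
# Parameterize the image flow; "my-image" is a hypothetical tag.
TAG="my-image"
REG="localhost:32000"                      # microk8s registry NodePort
IMAGE="${REG}/cilium/cilium:${TAG}"
echo "${IMAGE}"

# 1. Build with docker-ce (run from the cilium source tree):
# DOCKER_IMAGE_TAG="${TAG}" make docker-image
# 2./3. Tag and push into the local registry:
# docker tag "cilium/cilium:${TAG}" "${IMAGE}"
# docker push "${IMAGE}"
# 4. Pre-pull into the microk8s daemon (or use the prepull DaemonSet below):
# microk8s.docker pull "${IMAGE}"
```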

Building using docker-ce

  • Make your local changes to your Cilium repository.
# DOCKER_IMAGE_TAG="my-image" make docker-image
# docker tag cilium/cilium:my-image localhost:32000/cilium/cilium:my-image
# docker push localhost:32000/cilium/cilium:my-image

This uses your local docker-ce (and docker daemon hosted at /var/run/docker.sock) to push into the registry that was configured above.

If you have trouble with the above steps, check the Troubleshooting section.

Pre-pulling new images into microk8s.docker

The below instructions use the microk8s.docker via microk8s, which is hosted at /var/snap/microk8s/current/docker.sock.

To roll out the new Cilium image from the local registry reliably, I found it helpful to deploy this prepull YAML; otherwise the connection for fetching the image tends to get reset during container startup, which puts the node into a bad state:

apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
  name: prepull
  namespace: container-registry
spec:
  selector:
    matchLabels:
      name: prepull
  template:
    metadata:
      labels:
        name: prepull
    spec:
      initContainers:
      - name: prepull 
        image: docker
        command: ["docker", "pull", "localhost:32000/cilium/cilium:my-image"]
        volumeMounts:
        - name: docker
          mountPath: /var/run
      volumes:
      - name: docker
        hostPath:
          path: /var/snap/microk8s/current/
      containers:
      - name: pause
        image: gcr.io/google_containers/pause

# kubectl create -f prepull.yaml -n container-registry 

When you want to re-pull the image again:

# kubectl delete po -n container-registry -l name=prepull

Update Cilium

Then, edit your Cilium DS YAML to point to the new tag, replacing image: docker.io/cilium/cilium:v1.3.0 with image: localhost:32000/cilium/cilium:my-image:

# kubectl -n kube-system edit ds cilium

Set image and imagePullPolicy:

        image: localhost:32000/cilium/cilium:my-image
        imagePullPolicy: Never
        lifecycle:
          postStart:
            exec:
              command:
              - /cni-install.sh
          preStop:
            exec:
              command:
              - /cni-uninstall.sh

And rollout:

# kubectl -n kube-system rollout status ds cilium

If the tag is already pointing to your custom image, you should just need to delete the pods:

# kubectl -n kube-system delete po -l k8s-app=cilium
# kubectl -n kube-system rollout status ds cilium

Rolling back

If the rollout gets stuck, it can be debugged as follows ...

# kubectl get pods --all-namespaces -o wide
NAMESPACE            NAME                                    READY   STATUS         RESTARTS   AGE    IP              NODE   NOMINATED NODE
[...]
kube-system          cilium-hbcs8                            0/1     ErrImagePull   0          75s    147.75.80.23    test   <none>
[...]
# kubectl describe pod -n kube-system cilium-hbcs8
[...]
  Warning  Failed     30s                    kubelet, test      Failed to pull image "localhost:32000/cilium/cilium:my-image": rpc error: code = Unknown desc = Error while pulling image: Get http://localhost:32000/v1/repositories/cilium/cilium/images: read tcp localhost:53302->127.0.0.1:32000: read: connection reset by peer
  Normal   BackOff    4s (x4 over 103s)      kubelet, test      Back-off pulling image "localhost:32000/cilium/cilium:my-image"
  Warning  Failed     4s (x4 over 103s)      kubelet, test      Error: ImagePullBackOff
[...]

... e.g. in this case the imagePullPolicy was probably set to Always.

The daemon set updates are undone via:

# kubectl -n kube-system rollout undo ds cilium

Test workload

Check if Cilium is up and running:

# kubectl get pods --all-namespaces -o wide
# kubectl -n kube-system logs --timestamps cilium-1234

Deploy a sample application to test Cilium without policy first:

# kubectl create -f https://raw.githubusercontent.com/cilium/cilium/HEAD/examples/minikube/http-sw-app.yaml
# kubectl exec -it -n kube-system cilium-1234 -- cilium endpoint list
# kubectl exec -it tiefighter -- netperf -t TCP_STREAM -H 10.23.177.124
[...]

Force endpoint regeneration:

# kubectl delete po tiefighter

Troubleshooting

General microk8s troubleshooting steps may reveal the problem. A few common issues are also documented below.

Cilium EOF when attempting to reach docker

Ensure that the docker socket has been updated in the Cilium-DS YAML:

https://gist.github.com/joestringer/60a5f53d59e57274ed4c2a1736a7b101#set-up-cilium

Restart μK8s

In case the host IP changes, you can restart the kubernetes API server as follows in order to propagate the new IP to all kubernetes cluster members:

# microk8s.stop
# microk8s.start

Problems pushing to docker registry

The following error may occur when IP connectivity to the registry is not available:

# docker push localhost:32000/cilium/cilium:my-image
The push refers to repository [localhost:32000/cilium/cilium]
Get http://localhost:32000/v2/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)

This may occur for multiple reasons:

  • Your external IP has changed (for instance, your laptop was suspended and resumed on a new network)
    • For this case, you can follow the instructions above to Restart μK8s.
  • Your localhost attempts to connect to the registry via IPv6
    • The docker daemon is not listening on IPv4, so if the docker destination of localhost:32000 resolves to IPv6 this may cause timeouts. To overcome this, use 127.0.0.1 instead of localhost, or remove the IPv6 (::1) host alias from /etc/hosts.
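You can check whether localhost has an IPv6 alias by inspecting /etc/hosts. The sketch below runs the check against a sample hosts file so the logic is visible; point the grep at /etc/hosts on a real system.

```shell
# Sample hosts file for illustration; on a real system, grep /etc/hosts instead.
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
EOF

# If localhost also maps to ::1, the registry push may time out over IPv6.
if grep -q '^::1.*localhost' /tmp/hosts.sample; then
  echo "localhost also maps to ::1; use 127.0.0.1:32000 for the registry"
fi

# Workaround: push via the IPv4 loopback explicitly (requires docker):
# docker push 127.0.0.1:32000/cilium/cilium:my-image
```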

Problems deploying Cilium daemon set

When deploying the cilium daemon set for the first time, the following error may occur. Make sure the --allow-privileged option is set in kube-apiserver and that the apiserver has been restarted:

# kubectl create -n kube-system -f https://raw.githubusercontent.com/cilium/cilium/1.3.0/examples/kubernetes/addons/etcd/standalone-etcd.yaml
[...]
The DaemonSet "cilium" is invalid: 
* spec.template.spec.containers[0].securityContext.privileged: Forbidden: disallowed by cluster policy
* spec.template.spec.initContainers[0].securityContext.privileged: Forbidden: disallowed by cluster policy

Misc

Some quick-links for general troubleshooting:
