MicroK8s, Ingress and MetalLB


Out of the box, the MicroK8s distribution of ingress-nginx (installed via the ingress addon) binds to ports 80 and 443 on each node's IP address using a hostPort, as we can see here:

microk8s kubectl -n ingress describe daemonset.apps/nginx-ingress-microk8s-controller
Name:           nginx-ingress-microk8s-controller
Selector:       name=nginx-ingress-microk8s
Node-Selector:  <none>
Labels:         microk8s-application=nginx-ingress-microk8s
Annotations:    deprecated.daemonset.template.generation: 1
Desired Number of Nodes Scheduled: 4
Current Number of Nodes Scheduled: 4
Number of Nodes Scheduled with Up-to-date Pods: 4
Number of Nodes Scheduled with Available Pods: 4
Number of Nodes Misscheduled: 0
Pods Status:  4 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           name=nginx-ingress-microk8s
  Service Account:  nginx-ingress-microk8s-serviceaccount
    Ports:       80/TCP, 443/TCP
    Host Ports:  80/TCP, 443/TCP
    Liveness:  http-get http://:10254/healthz delay=30s timeout=5s period=10s #success=1 #failure=3
      POD_NAME:        (v1:metadata.name)
      POD_NAMESPACE:   (v1:metadata.namespace)
    Mounts:           <none>
  Volumes:            <none>
Events:               <none>

This is fine for a single-node deployment, but now that MicroK8s supports HA clustering, we need a way of load-balancing our Ingress: a multi-node cluster runs one ingress controller per node, each bound to its own node's IP.

Enter MetalLB, a software load balancer that works well in layer 2 mode and is also available as the MicroK8s addon metallb. We can use MetalLB to load-balance between the ingress controllers.

There's one snag, though: MetalLB requires a Service resource, and the MicroK8s distribution of Ingress does not include one.

microk8s kubectl -n ingress get svc
No resources found in ingress namespace.

This gist contains the definition for a Service which should work with default deployments of the MicroK8s addons Ingress and MetalLB. It assumes that both of these addons are already enabled.

microk8s enable ingress metallb
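
Note that the metallb addon needs a pool of addresses to allocate from, which can be passed inline when enabling it. A sketch, assuming your LAN has a free block to spare (the range below is a placeholder; substitute one that suits your network):

```sh
# Enable the ingress addon, then MetalLB with an example address pool.
# The IP range is a placeholder and must be unused on your local network.
microk8s enable ingress
microk8s enable metallb:192.168.1.240-192.168.1.250
```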

Download this manifest ingress-service.yaml and apply it with:

microk8s kubectl apply -f ingress-service.yaml

Now there is a load balancer which listens on an IP from the MetalLB pool and directs traffic towards one of the listening ingress controllers.

microk8s kubectl -n ingress get svc
NAME      TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                      AGE
ingress   LoadBalancer   80:30029/TCP,443:30276/TCP   24h

apiVersion: v1
kind: Service
metadata:
  name: ingress
  namespace: ingress
spec:
  selector:
    name: nginx-ingress-microk8s
  type: LoadBalancer
  # loadBalancerIP is optional. MetalLB will automatically allocate an IP
  # from its pool if not specified. You can also specify one manually.
  # loadBalancerIP: x.y.z.a
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443

@KlimDos KlimDos commented Apr 25, 2021

Hey Jonathan,

Nicely done! It's strange that there is no info on Canonical's site about how to set up ingress with a LB.

BTW, ingress works over MetalLB BUT it is also still accessible over the node's default IP, so is there a way to disable that?

Even when I did not specify the "host" directive in the ingress object, I would like to have this service available only via the single MetalLB instance.


@djjudas21 djjudas21 commented Apr 25, 2021

Hey @KlimDos. Do you mean the ingress is still available on the node's IP on ports 80+443, or a high port?

[jonathan@latitude ~]$ kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE
ingress-lb   LoadBalancer   80:31758/TCP,443:31722/TCP   82d

This is how my config looks right now. I just tested and I can get to my ingress on 80+443 on the MetalLB IP and on the node IP, so it's behaving the same as yours. Can confirm my nodes are still listening on 80+443 themselves:

jonathan@kube01:~$ sudo netstat -lan | grep :443
tcp        0      0        ESTABLISHED
tcp        0      0        ESTABLISHED

To be honest I don't know how or why it does this, or whether it's a necessary part of how Kubernetes/MetalLB works, but if it's a problem in your environment you could solve it with host firewalling: delete the global firewall rules for 80+443 and create IP-specific firewall rules for the MetalLB IP.
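
For example, with ufw on each node, the firewalling idea above might look like this. This is only a sketch: 192.168.1.240 is a placeholder for your MetalLB IP, and the existing rule names will differ per host.

```sh
# Remove any blanket allow rules for HTTP/HTTPS on the node itself
sudo ufw delete allow 80/tcp
sudo ufw delete allow 443/tcp

# Permit 80/443 only when traffic is addressed to the MetalLB IP
sudo ufw allow proto tcp to 192.168.1.240 port 80
sudo ufw allow proto tcp to 192.168.1.240 port 443
```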


@KlimDos KlimDos commented Apr 26, 2021

Hey @djjudas21,

Yes, you're right. The exposed deployment keeps being available on the k8s node's host IP (I'm using a single-node installation).



apiVersion: v1
kind: Namespace
metadata:
  name: terraform-report-ns
---
apiVersion: v1
kind: Service
metadata:
  name: terraform-report-svc
  namespace: terraform-report-ns
spec:
  ports:
    - name: plaintext
      port: 80
      targetPort: 80
  selector:
    app: terraform-report-app
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: terraform-report-deploy
  namespace: terraform-report-ns
spec:
  replicas: 2
  selector:
    matchLabels:
      app: terraform-report-app
  template:
    metadata:
      labels:
        app: terraform-report-app
        terraform: "false"
    spec:
      containers:
        - image: klimdos/site-api:0.0.2-alpha
          name: terraform-report-app-main-container
          ports:
            - containerPort: 80
          resources:
            limits:
              cpu: 500m
              memory: 128Mi
---
kind: Ingress
metadata:
  name: terraform-report-ingress
  namespace: terraform-report-ns
spec:
  rules:
    - http:
        paths:
          - backend:
              serviceName: terraform-report-svc
              servicePort: 80

nginx-ingress-microk8s exposed via the LoadBalancer:

NAMESPACE             NAME                        TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                      AGE
default               kubernetes                  ClusterIP     <none>         443/TCP                      13d
kube-system           kube-dns                    ClusterIP    <none>         53/UDP,53/TCP,9153/TCP       13d
kube-system           metrics-server              ClusterIP    <none>         443/TCP                      13d
kube-system           kubernetes-dashboard        ClusterIP     <none>         443/TCP                      13d
kube-system           dashboard-metrics-scraper   ClusterIP    <none>         8000/TCP                     13d
argocd                argocd-dex-server           ClusterIP    <none>         5556/TCP,5557/TCP,5558/TCP   22h
argocd                argocd-metrics              ClusterIP    <none>         8082/TCP                     22h
argocd                argocd-redis                ClusterIP    <none>         6379/TCP                     22h
argocd                argocd-repo-server          ClusterIP    <none>         8081/TCP,8084/TCP            22h
argocd                argocd-server-metrics       ClusterIP   <none>         8083/TCP                     22h
argocd                argocd-server               LoadBalancer   80:31597/TCP,443:31083/TCP   22h
terraform-report-ns   terraform-report-svc        ClusterIP   <none>         80/TCP                       17h
ingress               ingress                     LoadBalancer   80:30668/TCP,443:32425/TCP   19h

It is not an issue, because specifying the "host" directive blocks by-IP requests and forces users to use DNS.
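
The "host" directive mentioned above would slot into the Ingress rule like this. A sketch only, using the same beta Ingress fields as the manifests above; the hostname is a placeholder, not a real record:

```yaml
kind: Ingress
metadata:
  name: terraform-report-ingress
  namespace: terraform-report-ns
spec:
  rules:
    # Requests whose Host header doesn't match fall through to the
    # controller's default backend instead of this service.
    - host: terraform-report.example.com   # placeholder hostname
      http:
        paths:
          - backend:
              serviceName: terraform-report-svc
              servicePort: 80
```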
