MicroK8s, Ingress and MetalLB


Out of the box, the MicroK8s distribution of ingress-nginx (installed via the MicroK8s addon ingress) binds to ports 80 and 443 on the node's IP address using a hostPort, as we can see here:

microk8s kubectl -n ingress describe daemonset.apps/nginx-ingress-microk8s-controller
Name:           nginx-ingress-microk8s-controller
Selector:       name=nginx-ingress-microk8s
Node-Selector:  <none>
Labels:         microk8s-application=nginx-ingress-microk8s
Annotations:    deprecated.daemonset.template.generation: 1
Desired Number of Nodes Scheduled: 4
Current Number of Nodes Scheduled: 4
Number of Nodes Scheduled with Up-to-date Pods: 4
Number of Nodes Scheduled with Available Pods: 4
Number of Nodes Misscheduled: 0
Pods Status:  4 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           name=nginx-ingress-microk8s
  Service Account:  nginx-ingress-microk8s-serviceaccount
  Containers:
   nginx-ingress-microk8s:
    Image:       quay.io/kubernetes-ingress-controller/nginx-ingress-controller-amd64:0.25.1
    Ports:       80/TCP, 443/TCP
    Host Ports:  80/TCP, 443/TCP
    Args:
      /nginx-ingress-controller
      --configmap=$(POD_NAMESPACE)/nginx-load-balancer-microk8s-conf
      --publish-status-address=127.0.0.1
    Liveness:  http-get http://:10254/healthz delay=30s timeout=5s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:        (v1:metadata.name)
      POD_NAMESPACE:   (v1:metadata.namespace)
    Mounts:           <none>
  Volumes:            <none>
Events:               <none>

This is fine for a single-node deployment, but now that MicroK8s supports HA clustering, we need a way of load-balancing our Ingress: a multi-node cluster runs one ingress controller per node, each bound to its own node's IP.
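Because the controller is a DaemonSet, each node runs its own copy; you can list the per-node pods (and which node each is pinned to) with:

```shell
# Each nginx-ingress-microk8s pod binds ports 80/443 on its own node via hostPort;
# -o wide adds the NODE column so you can see the one-pod-per-node layout
microk8s kubectl -n ingress get pods -o wide
```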

Enter MetalLB, a software load balancer for bare-metal clusters which works well in layer 2 mode and is also available as a MicroK8s addon, metallb. We can use MetalLB to load-balance across the ingress controllers.

There's one snag, though: MetalLB requires a Service resource, and the MicroK8s distribution of Ingress does not include one.

microk8s kubectl -n ingress get svc
No resources found in ingress namespace.

This gist contains the definition for a Service which should work with default deployments of the MicroK8s addons Ingress and MetalLB. It assumes that both of these addons are already enabled.

microk8s enable ingress metallb
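Note that the metallb addon needs an address pool to allocate from: MicroK8s prompts for one interactively, or you can pass a range inline when enabling it. The range below is only an example; pick unused addresses on your own LAN:

```shell
# Enable ingress, then MetalLB with a layer-2 address pool.
# 192.168.0.60-192.168.0.80 is a placeholder range; use free IPs on your network.
microk8s enable ingress
microk8s enable metallb:192.168.0.60-192.168.0.80
```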

Download this manifest ingress-service.yaml and apply it with:

microk8s kubectl apply -f ingress-service.yaml

Now there is a load balancer which listens on an IP allocated from MetalLB's pool and directs traffic to one of the listening ingress controllers.

microk8s kubectl -n ingress get svc
NAME      TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                      AGE
ingress   LoadBalancer   10.152.183.141   192.168.0.61   80:30029/TCP,443:30276/TCP   24h
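A quick end-to-end check is to hit the MetalLB IP directly. The IP below is the EXTERNAL-IP allocated above; with no matching Ingress rule you would typically get the controller's default backend rather than a timeout:

```shell
# Expect an nginx response (likely the default 404 backend) rather than a timeout
curl -i http://192.168.0.61/
# -k skips certificate verification, since the controller's default cert is self-signed
curl -ik https://192.168.0.61/
```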
ingress-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: ingress
  namespace: ingress
spec:
  selector:
    name: nginx-ingress-microk8s
  type: LoadBalancer
  # loadBalancerIP is optional. MetalLB will automatically allocate an IP
  # from its pool if not specified. You can also specify one manually.
  # loadBalancerIP: x.y.z.a
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443
@KlimDos commented Apr 25, 2021

Hey Jonathan,

Nicely done! It is strange that there is no info on Canonical's site about how to set up Ingress with a load balancer.

BTW, ingress works via MetalLB, but it is also still accessible on the node's default IP, so is there a way to disable that?

Even though I did not specify a host directive in the Ingress object, I would like this service to be available only via the single MetalLB IP.

@djjudas21 (Owner Author) commented Apr 25, 2021

Hey @KlimDos. Do you mean the ingress is still available on the node's IP on ports 80 and 443, or on a high port?

[jonathan@latitude ~]$ kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE
ingress-lb   LoadBalancer   10.152.183.23   192.168.0.61   80:31758/TCP,443:31722/TCP   82d

This is how my config looks right now. I just tested and I can get to my ingress on 80+443 on the MetalLB IP 192.168.0.61 and on the node IP 192.168.0.49, so it's behaving the same as yours. Can confirm my nodes are still listening on 80+443 themselves:

jonathan@kube01:~$ sudo netstat -lan | grep :443
tcp        0      0 192.168.0.49:37640      10.152.183.1:443        ESTABLISHED
tcp        0      0 192.168.0.49:37820      10.152.183.1:443        ESTABLISHED

To be honest I don't know how or why it does this, or whether it's a necessary part of how Kubernetes/MetalLB works, but if it's a problem in your environment, you could solve it with host firewalling: delete the global firewall rules for 80 and 443, and create IP-specific rules for the MetalLB IP.
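As a sketch of that firewalling approach with ufw, assuming the node IP 192.168.0.49 and MetalLB IP 192.168.0.61 from above (whether this takes effect can depend on how the CNI's hostPort iptables rules interact with the host firewall, so test it in your environment):

```shell
# Allow HTTP/HTTPS only when addressed to the MetalLB IP
sudo ufw allow proto tcp to 192.168.0.61 port 80
sudo ufw allow proto tcp to 192.168.0.61 port 443
# Deny the same ports on the node's own IP
sudo ufw deny proto tcp to 192.168.0.49 port 80
sudo ufw deny proto tcp to 192.168.0.49 port 443
```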

@KlimDos commented Apr 26, 2021

Hey @djjudas21,

Yes, you're right. The exposed deployment is still reachable on the k8s node's host IP (I'm using a single-node installation).


manifest:

apiVersion: v1
kind: Namespace
metadata:
  name: terraform-report-ns
---
apiVersion: v1
kind: Service
metadata:
  name: terraform-report-svc
  namespace: terraform-report-ns
spec:
  ports:
  - name: plaintext
    port: 80
    targetPort: 80
  selector:
    app: terraform-report-app
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: terraform-report-deploy
  namespace: terraform-report-ns
spec:
  replicas: 2
  selector:
    matchLabels:
      app: terraform-report-app
  template:
    metadata:
      labels:
        app: terraform-report-app
        terraform: "false"
    spec:
      containers:
      - image: klimdos/site-api:0.0.2-alpha
        name: terraform-report-app-main-container
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 500m
            memory: 128Mi
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: terraform-report-ingress
  namespace: terraform-report-ns
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: terraform-report-svc
          servicePort: 80
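As an aside, the networking.k8s.io/v1beta1 Ingress API used above was removed in Kubernetes 1.22. The equivalent manifest on the current networking.k8s.io/v1 API (same service name and namespace as above, with the now-required path and pathType added) would look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: terraform-report-ingress
  namespace: terraform-report-ns
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: terraform-report-svc
            port:
              number: 80
```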

nginx-ingress-microk8s exposed via the LB:

NAMESPACE             NAME                        TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                      AGE
default               kubernetes                  ClusterIP      10.152.183.1     <none>         443/TCP                      13d
kube-system           kube-dns                    ClusterIP      10.152.183.10    <none>         53/UDP,53/TCP,9153/TCP       13d
kube-system           metrics-server              ClusterIP      10.152.183.34    <none>         443/TCP                      13d
kube-system           kubernetes-dashboard        ClusterIP      10.152.183.9     <none>         443/TCP                      13d
kube-system           dashboard-metrics-scraper   ClusterIP      10.152.183.16    <none>         8000/TCP                     13d
argocd                argocd-dex-server           ClusterIP      10.152.183.55    <none>         5556/TCP,5557/TCP,5558/TCP   22h
argocd                argocd-metrics              ClusterIP      10.152.183.20    <none>         8082/TCP                     22h
argocd                argocd-redis                ClusterIP      10.152.183.78    <none>         6379/TCP                     22h
argocd                argocd-repo-server          ClusterIP      10.152.183.83    <none>         8081/TCP,8084/TCP            22h
argocd                argocd-server-metrics       ClusterIP      10.152.183.204   <none>         8083/TCP                     22h
argocd                argocd-server               LoadBalancer   10.152.183.136   192.168.10.0   80:31597/TCP,443:31083/TCP   22h
terraform-report-ns   terraform-report-svc        ClusterIP      10.152.183.115   <none>         80/TCP                       17h
ingress               ingress                     LoadBalancer   10.152.183.100   192.168.10.1   80:30668/TCP,443:32425/TCP   19h

It is not an issue, because specifying the host directive blocks by-IP requests and forces users to go via DNS.
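For reference, restricting an Ingress to a hostname is just a host field on the rule. The hostname below is a placeholder; its DNS record should point at the MetalLB IP:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: terraform-report-ingress
  namespace: terraform-report-ns
spec:
  rules:
  - host: terraform-report.example.com   # placeholder; point DNS at the MetalLB IP
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: terraform-report-svc
            port:
              number: 80
```

With a host set, requests that arrive by bare IP (no matching Host header) fall through to the controller's default backend instead of this service.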
