@d33d33
Last active March 17, 2021 07:18

OVH ingress LB

This gist describes how to deploy a Kubernetes load balancer at OVH while preserving the client source IP.

Howto

1. Install the NGINX Ingress Controller

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
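
Once the manifest is applied, the controller pod should come up in the ingress-nginx namespace. A quick check (assuming the default namespace and names from the manifest above):

kubectl -n ingress-nginx get pods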

2. Deploy a LB with proxy protocol support

kubectl apply -f https://gist.githubusercontent.com/d33d33/d95fc38e7b94dc4e41533196af228c8e/raw/ingress-lb.yml
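
At OVH the LoadBalancer can take a few minutes to get its external address. One way to watch for it (using the ingress-lb Service name from the manifest below):

kubectl -n ingress-nginx get svc ingress-lb -w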

3. Patch the Ingress Controller to support proxy protocol

kubectl -n ingress-nginx patch configmap nginx-configuration -p "$(curl -s https://gist.githubusercontent.com/d33d33/d95fc38e7b94dc4e41533196af228c8e/raw/patch-ingress-configmap.yml)"
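
To confirm the patch took effect, dump the ConfigMap and check for the use-proxy-protocol and proxy-real-ip-cidr keys:

kubectl -n ingress-nginx get configmap nginx-configuration -o yaml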

4. Restart the Ingress Controller

kubectl -n ingress-nginx get pod | grep 'ingress' | cut -d " " -f1 - | xargs -n1 kubectl -n ingress-nginx delete pod
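
On recent kubectl versions a rollout restart should achieve the same thing (a sketch; the Deployment name may differ between ingress-nginx releases):

kubectl -n ingress-nginx rollout restart deployment nginx-ingress-controller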

Testing

1. Deploy a simple echo service

kubectl apply -f https://gist.githubusercontent.com/d33d33/d95fc38e7b94dc4e41533196af228c8e/raw/echo-deployment.yml

2. Create a single-service Ingress

kubectl apply -f https://gist.githubusercontent.com/d33d33/d95fc38e7b94dc4e41533196af228c8e/raw/echo-ingress.yml
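
Once the Ingress is up, requests to the load balancer should return the echo service's JSON, which echoes back the headers it received. If proxy protocol is working, the x-forwarded-for / x-real-ip values should show your real public address rather than a 10.x cluster address (a sketch; replace <LB_IP> with the external IP of the ingress-lb Service):

curl -s http://<LB_IP>/ | grep -Ei 'x-forwarded-for|x-real-ip'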

echo-deployment.yml

apiVersion: v1
kind: Namespace
metadata:
  name: echo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-deployment
  namespace: echo
  labels:
    app: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
        - name: echo
          image: mendhak/http-https-echo
          ports:
            - containerPort: 80
            - containerPort: 443

echo-ingress.yml

apiVersion: v1
kind: Service
metadata:
  name: echo-service
  namespace: echo
spec:
  selector:
    app: echo
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-ingress
  namespace: echo
spec:
  backend:
    serviceName: echo-service
    servicePort: 80

ingress-lb.yml

kind: Service
apiVersion: v1
metadata:
  name: ingress-lb
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/ovh-loadbalancer-proxy-protocol: "v1"
spec:
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  externalTrafficPolicy: Local
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443
  type: LoadBalancer

patch-ingress-configmap.yml

data:
  use-proxy-protocol: "true"
  proxy-real-ip-cidr: "10.108.0.0/14"
  use-forwarded-headers: "false"
  http-snippet: |
    geo $realip_remote_addr $is_lb {
      default 0;
      10.108.0.0/14 1;
    }
  server-snippet: |
    if ($is_lb != 1) {
      return 403;
    }
@TwanoO67

Hello,

I just tried that on my OVH managed k8s, but I'm still getting addresses like "10.X.X.X" in the real-ip field of echo.
Do you have any tips to make this work?

@qfayet

qfayet commented Oct 10, 2019

There must be a compatibility problem with the latest release of ingress-nginx.
With the previous release, it works for me:
https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.25.1/deploy/static/mandatory.yaml

@qfayet

qfayet commented Oct 12, 2019

In fact, I now think an improvement has been made in the current version of ingress-nginx.
The patch is no longer necessary; just configure the nginx-configuration ConfigMap this way:

data:
  use-proxy-protocol: "true"
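
If that is all that is needed, the same change can be applied in one command (a sketch; same namespace and ConfigMap name as in the original howto):

kubectl -n ingress-nginx patch configmap nginx-configuration --type merge -p '{"data":{"use-proxy-protocol":"true"}}'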

@d33d33
Author

d33d33 commented Oct 16, 2019

Just fixed the patch: proxy-real-ip-cidr: "0.0.0.0/32" => proxy-real-ip-cidr: "10.108.0.0/14"
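
The correct range is whatever the OVH load balancer connects from; it appears to be exposed through the lb.k8s.ovh.net/egress-ips annotation on the Service, visible in the kubectl describe output quoted further down this thread:

kubectl -n ingress-nginx describe service ingress-lb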

@cglacet

cglacet commented Feb 24, 2021

Is this still how to do things nowadays? I see that the tutorial proposes a slightly different approach (the patch is very different).

When I use any of the proposed hello-world configurations I get an empty response (on the other hand, if I plug a hello-world service directly behind my load balancer it works just fine).

I find this output a bit strange (no endpoints):

❯ kubectl describe service -n ingress-nginx ingress-lb
Name:                     ingress-lb
Namespace:                ingress-nginx
Labels:                   <none>
Annotations:              lb.k8s.ovh.net/egress-ips:  <some IPS>
                          service.beta.kubernetes.io/ovh-loadbalancer-proxy-protocol: v1
Selector:                 app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx
Type:                     LoadBalancer
IP:                       <IP>
LoadBalancer Ingress:     ip-xxx-xxx-xxx-xxx.sbg.lb.ovh.net
Port:                     http  80/TCP
TargetPort:               80/TCP
NodePort:                 http  32555/TCP
Endpoints:                <none>
Port:                     https  443/TCP
TargetPort:               443/TCP
NodePort:                 https  32411/TCP
Endpoints:                <none>
Session Affinity:         None
External Traffic Policy:  Local
HealthCheck NodePort:     30787
Events:
  Type    Reason                Age                From                Message
  ----    ------                ----               ----                -------
  Normal  EnsuringLoadBalancer  21m (x2 over 22m)  service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   21m (x2 over 21m)  service-controller  Ensured load balancer

Any idea how to troubleshoot this, or maybe even what the problem could be?

Thanks.
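
Endpoints: <none> usually means the Service selector does not match any running pod, so it may be worth comparing the ingress-lb selector with the labels on the controller pods (a sketch, assuming the names from the manifests above):

kubectl -n ingress-nginx get pods --show-labels
kubectl -n ingress-nginx get endpoints ingress-lb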

@zffocussss


@cglacet how do you bind a dedicated public IP to the public load balancer?

@zffocussss

For example, I have a stable public IP, 1.2.3.4. How can I bind 1.2.3.4 to the LoadBalancer Service?
