@haproxytechblog
Created June 11, 2021 15:18
Run the HAProxy Kubernetes Ingress Controller Outside of Your Kubernetes Cluster
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    bgp: Enabled
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 172.16.0.0/16
      encapsulation: IPIP
      natOutgoing: Enabled
      nodeSelector: all()
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  logSeverityScreen: Info
  nodeToNodeMeshEnabled: true
  asNumber: 65000
---
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: my-global-peer
spec:
  peerIP: 192.168.50.21
  asNumber: 65000
/usr/local/bin/haproxy-ingress-controller \
  --external \
  --configmap=default/haproxy-kubernetes-ingress \
  --program=/usr/sbin/haproxy \
  --disable-ipv6 \
  --ipv4-bind-address=0.0.0.0 \
  --http-bind-port=80
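When the controller runs outside the cluster like this, it is usually supervised by the host's init system rather than launched by hand. A minimal systemd unit sketch wrapping the exact command above (the unit name and Restart policy are assumptions, not taken from the gist):

```ini
# /etc/systemd/system/haproxy-ingress-controller.service (hypothetical path)
[Unit]
Description=HAProxy Kubernetes Ingress Controller (external mode)
After=network-online.target

[Service]
ExecStart=/usr/local/bin/haproxy-ingress-controller \
    --external \
    --configmap=default/haproxy-kubernetes-ingress \
    --program=/usr/sbin/haproxy \
    --disable-ipv6 \
    --ipv4-bind-address=0.0.0.0 \
    --http-bind-port=80
Restart=on-failure

[Install]
WantedBy=multi-user.target
```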
$ vagrant ssh controlplane
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
controlplane Ready control-plane,master 4h7m v1.21.1
worker1 Ready <none> 4h2m v1.21.1
worker2 Ready <none> 3h58m v1.21.1
$ sudo calicoctl node status
Calico process is running.
IPv4 BGP status
+---------------+-------------------+-------+----------+--------------------------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+---------------+-------------------+-------+----------+--------------------------------+
| 192.168.50.21 | global | start | 00:10:03 | Active Socket: Connection |
| | | | | refused |
| 192.168.50.23 | node-to-node mesh | up | 00:16:11 | Established |
| 192.168.50.24 | node-to-node mesh | up | 00:20:08 | Established |
+---------------+-------------------+-------+----------+--------------------------------+
$ kubectl describe blockaffinities | grep -E "Name:|Cidr:"
Name: controlplane-172-16-49-64-26
Cidr: 172.16.49.64/26
Name: worker1-172-16-171-64-26
Cidr: 172.16.171.64/26
Name: worker2-172-16-189-64-26
Cidr: 172.16.189.64/26
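Each node's block affinity is a /26 carved from the 172.16.0.0/16 pool, matching the blockSize: 26 set in the Installation resource; these are exactly the prefixes the BIRD import filter accepts. A quick sketch with Python's ipaddress module confirming that relationship (illustrative only):

```python
import ipaddress

# The cluster pool from the Calico Installation (cidr: 172.16.0.0/16)
pool = ipaddress.ip_network("172.16.0.0/16")

# Per-node block affinities reported by kubectl above
blocks = ["172.16.49.64/26", "172.16.171.64/26", "172.16.189.64/26"]

for cidr in blocks:
    block = ipaddress.ip_network(cidr)
    # Each block is a /26 (blockSize: 26) inside the cluster pool
    assert block.subnet_of(pool)
    assert block.prefixlen == 26
    print(f"{cidr} is a /26 inside {pool}")
```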
$ vagrant ssh ingress
router id 192.168.50.21;
log syslog all;

# controlplane
protocol bgp {
  local 192.168.50.21 as 65000;
  neighbor 192.168.50.22 as 65000;
  direct;
  import filter {
    if ( net ~ [ 172.16.0.0/16{26,26} ] ) then accept;
  };
  export none;
}

# worker1
protocol bgp {
  local 192.168.50.21 as 65000;
  neighbor 192.168.50.23 as 65000;
  direct;
  import filter {
    if ( net ~ [ 172.16.0.0/16{26,26} ] ) then accept;
  };
  export none;
}

# worker2
protocol bgp {
  local 192.168.50.21 as 65000;
  neighbor 192.168.50.24 as 65000;
  direct;
  import filter {
    if ( net ~ [ 172.16.0.0/16{26,26} ] ) then accept;
  };
  export none;
}

protocol kernel {
  scan time 60;
  #import none;
  export all; # insert routes into the kernel routing table
}

protocol device {
  scan time 60;
}
$ sudo systemctl restart bird
$ sudo birdc show protocols
BIRD 1.6.8 ready.
name proto table state since info
bgp1 BGP master up 23:18:17 Established
bgp2 BGP master up 23:18:17 Established
bgp3 BGP master up 23:18:59 Established
kernel1 Kernel master up 23:18:15
device1 Device master up 23:18:15
$ sudo birdc show route protocol bgp2
BIRD 1.6.8 ready.
172.16.171.64/26 via 192.168.50.23 on enp0s8 [bgp2 23:18:18] * (100) [i]
$ sudo birdc show route protocol bgp3
BIRD 1.6.8 ready.
172.16.189.64/26 via 192.168.50.24 on enp0s8 [bgp3 23:19:00] * (100) [i]
$ route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default _gateway 0.0.0.0 UG 100 0 0 enp0s3
10.0.2.0 0.0.0.0 255.255.255.0 U 0 0 0 enp0s3
_gateway 0.0.0.0 255.255.255.255 UH 100 0 0 enp0s3
172.16.49.64 192.168.50.22 255.255.255.192 UG 0 0 0 enp0s8
172.16.171.64 192.168.50.23 255.255.255.192 UG 0 0 0 enp0s8
172.16.189.64 192.168.50.24 255.255.255.192 UG 0 0 0 enp0s8
192.168.50.0 0.0.0.0 255.255.255.0 U 0 0 0 enp0s8
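The Genmask 255.255.255.192 in the kernel table is just the dotted form of the /26 prefixes BIRD imported, with each block routed via its node's address. A quick check with Python's ipaddress module (illustrative only):

```python
import ipaddress

# One of the routes BIRD exported to the kernel table
route = ipaddress.ip_network("172.16.171.64/26")

# /26 corresponds to the Genmask column shown by `route`
assert str(route.netmask) == "255.255.255.192"

# 64 addresses per block, one block per node
print(route.num_addresses)  # 64
```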
$ sudo calicoctl node status
Calico process is running.
IPv4 BGP status
+---------------+-------------------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+---------------+-------------------+-------+----------+-------------+
| 192.168.50.21 | global | up | 00:32:13 | Established |
| 192.168.50.23 | node-to-node mesh | up | 00:16:12 | Established |
| 192.168.50.24 | node-to-node mesh | up | 00:20:09 | Established |
+---------------+-------------------+-------+----------+-------------+
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: app
  name: app
spec:
  replicas: 5
  selector:
    matchLabels:
      run: app
  template:
    metadata:
      labels:
        run: app
    spec:
      containers:
      - name: app
        image: jmalloc/echo-server
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: app
  name: app
spec:
  selector:
    run: app
  ports:
  - name: port-1
    port: 80
    protocol: TCP
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  namespace: default
spec:
  rules:
  - host: test.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app
            port:
              number: 80
$ kubectl apply -f app.yaml
backend default-app-port-1
  mode http
  balance roundrobin
  option forwardfor
  server SRV_1 172.16.171.67:8080 check weight 128
  server SRV_2 172.16.171.68:8080 check weight 128
  server SRV_3 172.16.189.68:8080 check weight 128
  server SRV_4 172.16.189.69:8080 check weight 128
  server SRV_5 172.16.189.70:8080 check weight 128
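The backend above is generated by the controller, with one server line per pod IP, reachable directly thanks to the BGP-learned routes. Traffic reaches it through an HTTP frontend bound on the --http-bind-port set earlier; a hand-written sketch of what that pairing looks like (the frontend name and ACL are illustrative, not copied from the generated config):

```
frontend http
  bind 0.0.0.0:80
  mode http
  # Route requests for the Ingress host to the generated backend
  acl host-test-local req.hdr(host) -i test.local
  use_backend default-app-port-1 if host-test-local
```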