The openshift/origin-haproxy-router is an HAProxy-based router that acts as the external-to-internal interface to OpenShift services. When you create a route, you specify the hostname and the service that the route connects to.
[vagrant@local ~]$ oc get routes
NAME         HOST/PORT                   PATH   SERVICE              LABELS                                                       INSECURE POLICY   TLS TERMINATION
kibana       kibana.local.feedhenry.io          logging-kibana       component=support,logging-infra=support,provider=openshift                     passthrough
kibana-ops   kibana-ops.example.com             logging-kibana-ops   component=support,logging-infra=support,provider=openshift                     passthrough
[vagrant@local ~]$ oc describe route kibana
Name: kibana
Created: 17 hours ago
Labels: component=support,logging-infra=support,provider=openshift
Host: kibana.local.feedhenry.io
Path: <none>
Service: logging-kibana
TLS Termination: passthrough
Insecure Policy: <none>
Here we can see that the host is set to kibana.local.feedhenry.io.
When a request comes in, OpenShift's load-balancing layer translates the domain name (the Host: HTTP header) into an internal IP address and proxies the request there. For example:
curl -v -k -H "Host: kibana.local.feedhenry.io" https://kibana.local.feedhenry.io
$ docker ps | grep ose-haproxy
9631032bb8ff registry.access.redhat.com/openshift3/ose-haproxy-router:v3.1.0.4 "/usr/bin/openshift-r" 12 days ago Up 12 days k8s_router.a5af550a_default-router-1-nue4x_default_a1bc5111-938a-11e5-a4e8-0800275732c8_731b314f
$ docker exec -i -t 9631032bb8ff /bin/bash
haproxy.config:
# public ssl accepts all connections and isn't checking certificates yet certificates to use will be
# determined by the next backend in the chain which may be an app backend (passthrough termination) or a backend
# that terminates encryption in this router (edge)
frontend public_ssl
bind :443
tcp-request inspect-delay 5s
tcp-request content accept if { req_ssl_hello_type 1 }
# if the connection is SNI and the route is a passthrough don't use the termination backend, just use the tcp backend
acl sni req.ssl_sni -m found
acl sni_passthrough req.ssl_sni,map(/var/lib/haproxy/conf/os_sni_passthrough.map) -m found
use_backend be_tcp_%[req.ssl_sni,map(/var/lib/haproxy/conf/os_tcp_be.map)] if sni sni_passthrough
# if the route is SNI and NOT passthrough enter the termination flow
use_backend be_sni if sni
# non SNI requests should enter a default termination backend rather than the custom cert SNI backend since it
# will not be able to match a cert to an SNI host
default_backend be_no_sni
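The selection logic above can be sketched in shell. This recreates the two map files locally (under /tmp, purely for the sketch; their contents mirror the route setup in this document) and replicates the lookups HAProxy performs on the SNI name:

```shell
# Recreate the router's map files locally for the sketch
mkdir -p /tmp/conf
printf '%s 1\n' kibana.local.feedhenry.io kibana-ops.example.com \
  > /tmp/conf/os_sni_passthrough.map
printf '%s\n' 'kibana.local.feedhenry.io mbaas-logging_kibana' \
              'kibana-ops.example.com mbaas-logging_kibana-ops' \
  > /tmp/conf/os_tcp_be.map

sni="kibana.local.feedhenry.io"

# acl sni_passthrough: is the SNI name a key in os_sni_passthrough.map?
if grep -q "^$sni " /tmp/conf/os_sni_passthrough.map; then
  # use_backend be_tcp_%[req.ssl_sni,map(.../os_tcp_be.map)]
  backend="be_tcp_$(awk -v h="$sni" '$1 == h {print $2}' /tmp/conf/os_tcp_be.map)"
else
  # non-passthrough SNI traffic enters the termination flow instead
  backend="be_sni"
fi
echo "$backend"   # be_tcp_mbaas-logging_kibana
```

In the real config the first use_backend only fires when both the sni and sni_passthrough ACLs match, which is exactly the passthrough branch taken here.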
So our request to https://kibana.local.feedhenry.io will be found in os_sni_passthrough.map.
Contents of os_sni_passthrough.map:
kibana.local.feedhenry.io 1
kibana-ops.example.com 1
So the sni_passthrough ACL will be true, and hence the following line will be evaluated:
use_backend be_tcp_%[req.ssl_sni,map(/var/lib/haproxy/conf/os_tcp_be.map)]
Let's take a look at os_tcp_be.map:
kibana.local.feedhenry.io mbaas-logging_kibana
kibana-ops.example.com mbaas-logging_kibana-ops
Our SNI name is kibana.local.feedhenry.io, so the expression will evaluate to:
use_backend be_tcp_mbaas-logging_kibana
This matches a section in haproxy.config:
backend be_tcp_mbaas-logging_kibana
balance source
hash-type consistent
timeout check 5000ms
server 10.1.0.159:3000 10.1.0.159:3000 check inter 5000ms
We can see that this will forward to 10.1.0.159:3000, which is the logging-kibana pod (the kibana proxy).
If we follow the kibana-proxy log:
$ oc logs -f logging-kibana-1-8pz2t --container=kibana-proxy
and then hit https://kibana.local.feedhenry.io, we will see the following in the logs:
10.1.0.1 - - [28/Jan/2016:08:20:06 +0000] "GET / HTTP/1.1" 302 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.111 Safari/537.36"
So we can prove that the proxy is reached this way.
But when we configure the same thing on AWS, the request never makes it to kibana-proxy. So we need to take a look one step back from the proxy.
Where does HAProxy get its information from?
When a route is modified, that change is registered in etcd. The openshift-router watches for these events and updates its configuration accordingly.
Let's try that out:
$ oc edit route kibana
And we can change the host to anything:
host: bajja.local.feedhenry.io
Now, exec into the router container and look at the mapping file:
$ docker exec -i -t 9631032bb8ff /bin/bash
$ vi os_sni_passthrough.map
kibana-ops.example.com 1
bajja.local.feedhenry.io 1
Using https://bajja.local.feedhenry.io instead of kibana.local.feedhenry.io will now work (assuming the new hostname also resolves to the router).
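The map-file rewrite the router performs here can be mimicked locally (a sketch only; in reality the router's Go process regenerates the file from its template rather than editing it in place):

```shell
# Start from the original map contents
map=$(mktemp)
printf 'kibana.local.feedhenry.io 1\nkibana-ops.example.com 1\n' > "$map"

# After `oc edit route kibana` changes the host, the old key is replaced
sed -i 's/^kibana\.local\.feedhenry\.io /bajja.local.feedhenry.io /' "$map"

cat "$map"
# bajja.local.feedhenry.io 1
# kibana-ops.example.com 1
```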
So we have been able to verify that connecting through the openshift-router works. What does not work is going through the Elastic Load Balancer (ELB). What is strange is that we can access apps deployed in OpenShift through the same load balancer. The difference is that kibana is a passthrough route while the apps are not.
When configuring listeners for the Elastic Load Balancer you need to configure your [listeners](http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/using-elb-listenerconfig-quickref.html) to pass TLS/SSL through untouched. To accomplish this, use TCP instead of HTTPS for the Back-End Protocol (and TCP rather than SSL on the front end), so that the TLS handshake, including the SNI name, reaches HAProxy intact.
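In the classic ELB API this corresponds to a Listener entry like the following (a sketch of the Listener structure; port 443 on both sides is an assumption):

```json
{
  "Protocol": "TCP",
  "LoadBalancerPort": 443,
  "InstanceProtocol": "TCP",
  "InstancePort": 443
}
```

With both sides set to TCP the ELB just forwards the byte stream, so the handshake arrives at HAProxy unmodified and the SNI-based backend selection shown earlier can work.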
The router also exposes HAProxy statistics on port 1936:
$ curl -v --user admin:wqODNHo2Aq http://localhost:1936
That was not that helpful for my current investigation but might be in the future.
For reference, these are the file paths the template router uses (from the OpenShift router source):
ConfigTemplate = "/var/lib/haproxy/conf/haproxy_template.conf"
ConfigFile = "/var/lib/haproxy/conf/haproxy.config"
HostMapFile = "/var/lib/haproxy/conf/os_http_be.map"
EdgeHostMapFile = "/var/lib/haproxy/conf/os_edge_http_be.map"
SniPassThruHostMapFile = "/var/lib/haproxy/conf/os_sni_passthrough.map"
ReencryptHostMapFile = "/var/lib/haproxy/conf/os_reencrypt.map"
TcpHostMapFile = "/var/lib/haproxy/conf/os_tcp_be.map"