Last active Feb 24, 2017
Kibana, ELB and OpenShift V3

The openshift/origin-haproxy-router is an HAProxy router that acts as the external-to-internal interface to OpenShift services. When you create a route, you specify the hostname and the service that the route connects to.

[vagrant@local ~]$ oc get routes
NAME         HOST/PORT                   PATH      SERVICE              LABELS                                                       INSECURE POLICY   TLS TERMINATION
kibana             logging-kibana       component=support,logging-infra=support,provider=openshift                     passthrough
kibana-ops                logging-kibana-ops   component=support,logging-infra=support,provider=openshift                     passthrough
[vagrant@local ~]$ oc describe route kibana
Name:			kibana
Created:		17 hours ago
Labels:			component=support,logging-infra=support,provider=openshift
Path:			<none>
Service:		logging-kibana
TLS Termination:	passthrough
Insecure Policy:	<none>
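For reference, the route described above corresponds to a Route object roughly like the following sketch. The host value is an assumed placeholder, since the real hostname is not shown above; the service name, labels, and passthrough termination come from the oc output:

```yaml
apiVersion: v1
kind: Route
metadata:
  name: kibana
  labels:
    component: support
    logging-infra: support
    provider: openshift
spec:
  host: kibana.example.com   # placeholder, not the real route host
  to:
    kind: Service
    name: logging-kibana
  tls:
    termination: passthrough
```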

Here we can see what the host is set to.

When a request comes in, OpenShift's load-balancing layer translates the domain name (the Host: HTTP header) into an internal IP address and proxies the request there. For example:

curl -v -k -H "Host:"
$ docker ps | grep ose-haproxy
9631032bb8ff                                        "/usr/bin/openshift-r"   12 days ago         Up 12 days                                   k8s_router.a5af550a_default-router-1-nue4x_default_a1bc5111-938a-11e5-a4e8-0800275732c8_731b314f
$ docker exec -i -t 9631032bb8ff /bin/bash

Inside the container, the relevant frontend in /var/lib/haproxy/conf/haproxy.config looks like this:
# public ssl accepts all connections and isn't checking certificates yet certificates to use will be
# determined by the next backend in the chain which may be an app backend (passthrough termination) or a backend
# that terminates encryption in this router (edge)
frontend public_ssl
  bind :443
  tcp-request  inspect-delay 5s
  tcp-request content accept if { req_ssl_hello_type 1 }

  # if the connection is SNI and the route is a passthrough don't use the termination backend, just use the tcp backend
  acl sni req.ssl_sni -m found
  acl sni_passthrough req.ssl_sni,map(/var/lib/haproxy/conf/ -m found
  use_backend be_tcp_%[req.ssl_sni,map(/var/lib/haproxy/conf/] if sni sni_passthrough

  # if the route is SNI and NOT passthrough enter the termination flow
  use_backend be_sni if sni

  # non SNI requests should enter a default termination backend rather than the custom cert SNI backend since it
  # will not be able to match a cert to an SNI host
  default_backend be_no_sni
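The map() lookup in the frontend above can be illustrated with a stand-alone sketch. The map file pairs an SNI hostname with a backend name suffix; the temporary file and the hostnames below are assumptions for illustration only, while the backend suffixes come from the router's own map files. HAProxy performs this lookup natively, so awk merely stands in for the map() converter:

```shell
# Illustration only: mimic HAProxy's map() converter with awk.
# The hostnames here are assumed placeholders; the suffixes
# (mbaas-logging_kibana, mbaas-logging_kibana-ops) are the real backend names.
mapfile=$(mktemp)
cat > "$mapfile" <<'EOF'
kibana.example.com mbaas-logging_kibana
kibana-ops.example.com mbaas-logging_kibana-ops
EOF

sni="kibana.example.com"
# map(<file>) returns the value whose key matches the SNI hostname.
suffix=$(awk -v h="$sni" '$1 == h { print $2 }' "$mapfile")
echo "use_backend be_tcp_${suffix}"
# prints: use_backend be_tcp_mbaas-logging_kibana
rm -f "$mapfile"
```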

So the host of our request will be found in the passthrough map file.

So the sni_passthrough ACL will be true; hence the following line will be evaluated:

use_backend be_tcp_%[req.ssl_sni,map(/var/lib/haproxy/conf/]

Let's take a look at the map file; it contains the backends mbaas-logging_kibana and mbaas-logging_kibana-ops.

So, given the SNI name of our request, the line will evaluate to:

use_backend be_tcp_mbaas-logging_kibana

This matches a section in haproxy.config:

backend be_tcp_mbaas-logging_kibana
  balance source
  hash-type consistent
  timeout check 5000ms

  server check inter 5000ms

We can see that this will forward to the logging-kibana pod (the kibana-proxy). If we follow the kibana-proxy log:

$ oc logs -f logging-kibana-1-8pz2t --container=kibana-proxy

And when we hit the route, we will see the following in the logs:

- - [28/Jan/2016:08:20:06 +0000] "GET / HTTP/1.1" 302 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.111 Safari/537.36"

So we can prove that the proxy is reached this way.

But when we configure this on AWS and try the same thing, the request never makes it to the kibana-proxy. So we need to take a step back from the proxy.

Where does haproxy get its information from?
When a route is modified, the change is registered in etcd. The openshift-router watches for these events and updates its information. Let's try that out:

$ oc edit route kibana

And we can change the host to anything:


Now, access the docker container and look at the mapping file:

$ docker exec -i -t 9631032bb8ff /bin/bash
$ vi 1 1

Using the new hostname instead of the original one will still work.

So we have been able to verify that connecting via the openshift-router works. What does not work is going through the Elastic Load Balancer (ELB). What is strange is that we can access apps deployed in OpenShift through the same load balancer. The difference is that kibana uses a passthrough route while the apps do not.

When configuring listeners for the Elastic Load Balancer, you need to configure them to pass TLS/SSL through untouched (see the AWS ELB listener configuration documentation). To accomplish this, use TCP instead of HTTPS for the Back-End Protocol.
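As a sketch, the listener settings for passthrough would look like this. The ports are assumptions; the essential point is TCP on both sides, so the ELB never terminates TLS and the SNI hello reaches the openshift-router intact:

```
Load Balancer Protocol: TCP     Load Balancer Port: 443
Instance Protocol:      TCP     Instance Port:      443
```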

Access haproxy stats

$ curl -v --user admin:wqODNHo2Aq http://localhost:1936

That was not that helpful for my current investigation but might be in the future.

ConfigTemplate = "/var/lib/haproxy/conf/haproxy_template.conf"
ConfigFile = "/var/lib/haproxy/conf/haproxy.config"
HostMapFile = "/var/lib/haproxy/conf/"
EdgeHostMapFile = "/var/lib/haproxy/conf/"
SniPassThruHostMapFile = "/var/lib/haproxy/conf/"
ReencryptHostMapFile = "/var/lib/haproxy/conf/"
TcpHostMapFile = "/var/lib/haproxy/conf/"


tj13 commented May 27, 2016

Hi denbev,
I got a very strange issue regarding OpenShift secure routes. I deployed the Hawkular Metrics component successfully, but I cannot access the HAWKULAR_HOSTNAME from the web browser, and no metrics display on the OpenShift web console either.
