@kekru
Last active March 14, 2024 10:30

Nginx TLS SNI routing, based on subdomain pattern

Nginx can be configured to route to a backend based on the requested domain name, which is included in the SSL/TLS handshake (Server Name Indication, SNI).
This works for HTTP upstream servers, but also for any other protocol that can be secured with TLS.

Prerequisites

  • at least nginx 1.15.9 to use variables in ssl_certificate and ssl_certificate_key.
  • check nginx -V for the following:
    ...
    TLS SNI support enabled
    ...
    --with-stream_ssl_module 
    --with-stream_ssl_preread_module

It works well with the nginx:1.15.9-alpine docker image.
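Note that stream is a top-level context: it sits next to the http block in nginx.conf and cannot be placed inside it. A minimal layout sketch (the include paths are assumptions, not part of this gist):

worker_processes auto;
events {}

stream {
  # the stream configs shown below would be included here
  include /etc/nginx/streams.d/*.conf;   # hypothetical directory
}

http {
  include /etc/nginx/conf.d/*.conf;
}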

Non-terminating, TLS passthrough

Pass the TLS stream to an upstream server, based on the domain name from the TLS SNI field. TLS is not terminated here.
The upstream server can serve HTTPS or any other TLS-secured TCP protocol.

stream {  

  map $ssl_preread_server_name $targetBackend {
    ab.mydomain.com  upstream1.example.com:443;
    xy.mydomain.com  upstream2.example.com:443;
  }   
 
  server {
    listen 443; 
        
    proxy_connect_timeout 1s;
    proxy_timeout 3s;
    resolver 1.1.1.1;
    
    proxy_pass $targetBackend;       
    ssl_preread on;
  }
}
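If a client sends an SNI name that is not in the map, $targetBackend stays empty and proxy_pass fails for that connection. A possible variant with a catch-all entry (the fallback backend is an assumption):

  map $ssl_preread_server_name $targetBackend {
    ab.mydomain.com  upstream1.example.com:443;
    xy.mydomain.com  upstream2.example.com:443;
    default          fallback.example.com:443;   # hypothetical catch-all for unknown or missing SNI names
  }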

Terminating TLS, forward TCP

Terminate TLS and forward the decrypted TCP stream to the upstream server.

stream {  

  map $ssl_server_name $targetBackend {
    ab.mydomain.com  upstream1.example.com:443;
    xy.mydomain.com  upstream2.example.com:443;
  }

  map $ssl_server_name $targetCert {
    ab.mydomain.com /certs/server-cert1.pem;
    xy.mydomain.com /certs/server-cert2.pem;
  }

  map $ssl_server_name $targetCertKey {
    ab.mydomain.com /certs/server-key1.pem;
    xy.mydomain.com /certs/server-key2.pem;
  }
  
  server {
    listen 443 ssl; 
    ssl_protocols       TLSv1.2;
    ssl_certificate     $targetCert;
    ssl_certificate_key $targetCertKey;
        
    proxy_connect_timeout 1s;
    proxy_timeout 3s;
    resolver 1.1.1.1;
      
    proxy_pass $targetBackend;
  } 
}
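Clients that send no SNI will not match any map entry, so the certificate variables stay empty and the handshake fails. One way to handle that is a default entry in each map (sketch; the fallback certificate paths are assumptions):

  map $ssl_server_name $targetCert {
    default         /certs/fallback-cert.pem;   # assumed fallback cert for clients without SNI
    ab.mydomain.com /certs/server-cert1.pem;
    xy.mydomain.com /certs/server-cert2.pem;
  }

  map $ssl_server_name $targetCertKey {
    default         /certs/fallback-key.pem;    # assumed matching fallback key
    ab.mydomain.com /certs/server-key1.pem;
    xy.mydomain.com /certs/server-key2.pem;
  }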

Choose upstream based on domain pattern

The domain name can be matched with a regex pattern and captured into variables; see regex_names.
This can be used to choose a backend/upstream based on the pattern of a (sub)domain. This is inspired by robszumski/k8s-service-proxy.

The following configuration extracts a subdomain into variables and uses them to create the upstream server name.

stream {  

  map $ssl_preread_server_name $targetBackend {
    ~^(?<app>.+)-(?<namespace>.+).mydomain.com$ $app-public.$namespace.example.com:8080;
  }
  ...
}

Your Nginx should be reachable over the wildcard subdomain *.mydomain.com.
A request to shop-staging.mydomain.com will be forwarded to shop-public.staging.example.com:8080.
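Because the map key is a regex, the literal dots can be escaped to keep the match strict, and a default entry catches names that do not fit the pattern (sketch; the fallback backend is an assumption):

  map $ssl_preread_server_name $targetBackend {
    ~^(?<app>.+)-(?<namespace>.+)\.mydomain\.com$ $app-public.$namespace.example.com:8080;
    default fallback.example.com:8080;   # hypothetical backend for names that do not match
  }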

Exposing K8s services by pattern

In Kubernetes, you can use this to expose all services with a specific name pattern.
This configuration exposes all services whose names end with -public.
A request to shop-staging-9999.mydomain.com will be forwarded to shop-public in the namespace staging on port 9999.
You will also need to update the resolver; see below.

stream {  

  map $ssl_preread_server_name $targetBackend {
    ~^(?<service>.+)-(?<namespace>.+)-(?<port>.+).mydomain.com$ $service-public.$namespace.svc.cluster.local:$port;
  }
  
  server {
    ...
    resolver kube-dns.kube-system.svc.cluster.local;
    ...
  }
}
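Filled in, the elided server block can follow the passthrough example from above (sketch; the valid=10s resolver parameter is an assumption, added so service IPs are re-resolved regularly):

  server {
    listen 443;
    ssl_preread on;

    proxy_connect_timeout 1s;
    proxy_timeout 3s;
    resolver kube-dns.kube-system.svc.cluster.local valid=10s;   # assumed short cache so service IP changes are picked up

    proxy_pass $targetBackend;
  }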
@rayray221

@kekru is it possible to have multiple configurations for terminating TLS on the same reverse proxy? I currently have an HTTPS upstream server defined and working, terminating at the proxy. I would like to add another server to the configuration that would terminate on the backend server. I have tried adding a stream directive with a similar configuration to the above to my config, but nginx seems to ignore it and routes it to my default site anyway. Is this even possible? Any help is greatly appreciated!

@kekru
Author

kekru commented May 11, 2020

Hi @rayray221,
sorry for the late answer. Did you already find a solution?

Do I understand correctly? You have one nginx. And you want to configure at the same time:

  • non terminating, TLS pass through for hello.example.com
  • terminating TLS, forward TCP for world.example.com

I'm not a super nginx expert, but I think it will work, just not on the same port.
I think you need something like this:

stream {  
   
   ... # content: see above, in the article
   ...

   server { # non terminating, TLS pass through
    listen 443; 
        
    proxy_connect_timeout 1s;
    proxy_timeout 3s;
    resolver 1.1.1.1;
    
    proxy_pass $targetBackend; 
    ssl_preread on;
  }
  
  server { # terminating TLS, forward TCP
    # here another port
    listen 8443 ssl; 
    ssl_protocols       TLSv1.2;
    ssl_certificate     $targetCert;
    ssl_certificate_key $targetCertKey;
        
    proxy_connect_timeout 1s;
    proxy_timeout 3s;
    resolver 1.1.1.1;
      
    proxy_pass $targetBackend;
  } 
}

As you see, I chose two different ports (443 and 8443).
But I'm not sure, I did not test this.
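If both really have to share port 443, one untested idea is to keep the passthrough listener as the single public entry point and let it hand the to-be-terminated name off to a second, internal listener (the internal port 8443, certificate paths and backend below are assumptions):

stream {

  map $ssl_preread_server_name $targetBackend {
    hello.example.com upstream1.example.com:443;   # TLS passthrough, stays encrypted end-to-end
    world.example.com 127.0.0.1:8443;              # hand off to the terminating listener below
  }

  server { # public entry point, non terminating
    listen 443;
    ssl_preread on;
    resolver 1.1.1.1;
    proxy_pass $targetBackend;
  }

  server { # internal listener: terminates TLS, forwards plain TCP
    listen 127.0.0.1:8443 ssl;
    ssl_protocols       TLSv1.2;
    ssl_certificate     /certs/world-cert.pem;     # hypothetical cert paths
    ssl_certificate_key /certs/world-key.pem;
    proxy_pass plain-backend.example.com:8080;     # hypothetical plain-TCP backend
  }
}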

@rayray221

Thanks @kekru. I ended up terminating both on the reverse proxy. To be honest, the reason I wanted to have multiple terminations was more out of laziness than need. I had an existing Nginx web server that was terminating already, and was adding an additional website under the same IP. The existing one has a bit of a complicated setup and I didn't want to spend the time troubleshooting it. All in all, I did get this working and it is all terminating at my reverse proxy which honestly is the desired configuration anyway.

Appreciate your response!

@LeonDragon

I am a newbie in this area, but I have a naive question: is this configuration related to the DNS server in any way, or just to the webserver (i.e., the VPS)? Let's say I am not configuring anything in the DNS server (such as a subdomain) and my domain is www.example.com. The server is configured to "extract a subdomain into variables" as in your code.

  • When a client requests content at abc.example.com, what is the DNS server's behavior? Does the DNS server pass the request through to the designated webserver with the metadata [abc.example.com], so that the server knows which part of the website the client wants to connect to?

Thank you in advance!

@kekru
Author

kekru commented Jun 24, 2020

Hi @LeonDragon,
your DNS server must return the IP of your webserver when abc.example.com is requested.

For example, if your webserver's IP is 11.22.33.44:

  • You can use a wildcard A record in your DNS: A *.example.com 11.22.33.44 (I prefer this)
  • Or a simple A record: A abc.example.com 11.22.33.44

Then your webserver receives the domain name abc.example.com (in the TLS SNI field and, for HTTP, in the Host header).
It can extract parts of the domain and decide which part of the website to show, or which other server to delegate to.

@LeonDragon

Thanks @kekru, I understand now :)

@blackandred

Good post, thanks.

@bluefangs

Hi @kekru,

I have the following question:

I have a couple of HTTPS services that are running inside docker containers. I also have an nginx container set up so that it routes URLs to the relevant containers. Each of the HTTPS services in the respective containers uses a certificate with a wildcard DNS name. In my case:

[ alternate_names ]

DNS.1        = *.myapps.local

I have configured nginx to NOT terminate the SSL connection, but rather have it pass through to the backend servers:

redirect_http.conf

# Redirect any http request on port 80 to https

server {
  listen        80;

  server_name   _;

  return 301 https://$host$request_uri;
}

passthrough.stream

# https://gerco.dev/NGINX-Reverse-Proxy-with-TLS-Passthrough/


map $ssl_preread_server_name $name {
		test1.myapps.local  server1_https;
		test2.myapps.local  server2_https;
		default $ssl_preread_server_name;
}


upstream server1_https {
		server service1:443; # ---------> Since I've linked the containers in the compose file, this is valid
}

upstream server2_https {
		server service2:443; # ---------> Since I've linked the containers in the compose file, this is valid
}


server {
		listen 443;
		listen [::]:443;
		ssl_preread on;
		proxy_ssl_server_name on;
		# proxy_ssl_session_reuse off;
		proxy_pass $name;

}

I have set up the /etc/hosts file as below:

192.168.1.50  test1.myapp.local test2.myapp.local

The problem being:

when I access test1.myapp.local --> service1's page gets rendered.
when I access test2.myapp.local --> service1's page STILL gets rendered.

I'm hosting 2 subdomains in the same IP. And each time, no matter which of the two URLs I visit, I always end up at the first service.

How can I fix this? My understanding is that $ssl_preread_server_name is supposed to tell nginx the domain I am visiting. Is the fact that I'm using a wildcard alternate_name in the cert somehow to blame?

Thanks.

@kekru
Author

kekru commented Sep 10, 2020

Hi @bluefangs
your config looks good to me; I would expect it to work.

I don't think the cert's content is relevant here. Nginx only looks at the SNI in the TLS handshake, so the certificate should not matter.

Maybe you should try to get some more logging out of nginx.
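A minimal sketch for that, assuming nginx 1.11.4+ with the stream log module: a stream-level log_format that records the preread SNI name and the backend it was mapped to, so you can see which map entry each connection hits.

stream {
  # log the SNI name from ssl_preread and the upstream chosen by the map
  log_format sni_routing '$remote_addr [$time_local] '
                         'sni="$ssl_preread_server_name" backend="$name" '
                         'status=$status bytes_sent=$bytes_sent';
  access_log /var/log/nginx/stream-access.log sni_routing;

  ...
}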

@darc1

darc1 commented Oct 21, 2020

@kekru Cool stuff, do you know if dtls passthrough is available as well?

@kekru
Author

kekru commented Oct 21, 2020

Never heard of DTLS before ^^ When googling, I only find this experimental description
http://nginx.org/patches/dtls/README.txt
Maybe that works

@pathardepavan

I have a k8s cluster with multiple backend services. For one of the services I am trying to do TLS forwarding, whereas for the other services we can terminate TLS in nginx itself. I have installed the nginx ingress helm chart and set the SSL termination argument to true. We have only one entry point via nginx for the services. How do I achieve SNI routing for k8s services? Can someone help?

@kekru
Author

kekru commented Jan 7, 2021

@pathardepavan
I did not configure it for k8s yet, but I think SSL Passthrough is what you are looking for. It seems to be disabled by default in nginx ingress.
If nginx does not work for you, you could maybe replace it with Traefik

@SuzukiHonoka

Thanks for sharing!

@kevprice83

Is there a way to use keepalive to reuse TLS connections to the upstream if multiple upstreams resolve to a single IP? The problem I am seeing: I have one upstream block with multiple server names, and this becomes a problem if the upstream host is configured with passthrough, because the connection is reused by Nginx based on IP and you will sometimes be routed to the wrong upstream. I can't see any way to work around this without somehow creating a mapping of upstreams and IPs that doesn't depend only on hostname.

@maganuk

maganuk commented Jul 5, 2021

Hi @kekru, do you know if this will work with the ssl_client_certificate directive as well? We are trying to configure the path of the client certificate file dynamically.

Thanks for sharing!

@kekru
Author

kekru commented Jul 5, 2021

Hi @maganuk, I did not try it myself, but this approach looks very generic. If it works for ssl_certificate and ssl_certificate_key, it should work for ssl_client_certificate, too.

@amigthea

amigthea commented Jul 9, 2021

[quoting @bluefangs' comment above in full]

I am EXACTLY in the same situation (but with 3 backend servers); as soon as I started to use a wildcard certificate, this problem showed up. Did you ever solve it?

To add some more information:

Proxy configuration:

[...]

stream {

  map $ssl_preread_server_name $upstream {
    default          0.0.0.0;
    example.com      upstream_example;
    a.example.com    upstream_a_example;
    b.example.com    upstream_b_example;

  }

  server {
    listen 10.1.1.10:443;
    ssl_preread on;
    proxy_protocol on;
    proxy_connect_timeout 5s;
    proxy_pass $upstream;
  }

  upstream upstream_example {
    server 10.1.1.11:443;
  }

  upstream upstream_a_example {
    server 10.1.1.12:443;
  }

  upstream upstream_b_example {
    server 10.1.1.13:443;
  }

}

Backend servers:

example.com

  server {
    listen 80;
    server_name example.com;
    return 301 https://$server_name:443$request_uri;
}

  server {
    listen 443;
    server_name example.com;
    ssl_certificate /path/to/wildcard_cert_fullchain.pem;
    ssl_certificate_key /path/to/wildcard_cert_privkey.pem;
    [...]
}

a.example.com


  server {
    listen 80;
    server_name a.example.com;
    return 301 https://$server_name:443$request_uri;
}

  server {
    listen 443;
    server_name a.example.com;
    ssl_certificate /path/to/wildcard_cert_fullchain.pem;
    ssl_certificate_key /path/to/wildcard_cert_privkey.pem;
    [...]
}

b.example.com

  server {
    listen 80;
    server_name b.example.com;
    return 301 https://$server_name:443$request_uri;
}

  server {
    listen 443;
    server_name b.example.com;
    ssl_certificate /path/to/wildcard_cert_fullchain.pem;
    ssl_certificate_key /path/to/wildcard_cert_privkey.pem;
    [...]
}
  • The first backend server I visit is displayed correctly, but after that the other two redirect me to the first one
  • This behaviour lasts for about 10 minutes; if I go idle in that time and then refresh the browser page, it connects me correctly to the URL that was giving me the wrong page, but again, if I try to connect to the other two websites shortly after that, it redirects me back to the last working one
  • Clearing the browser cache "resets" that, so I can visit each backend server within those 10 minutes if I clear the browser cache between each visit
  • I have the proxy_timeout directive at the default value (10 minutes); if I set it to 3 seconds, as in the original post, the problem is somehow bypassed, but that causes other problems (some connections that need time to complete are terminated early, no redirect if connections happen within those 3 seconds, etc.)
  • I tcpdumped the connections and SNI is working as intended; it couldn't be otherwise, because if I use normal certificates, TLS sessions are routed correctly to each backend server

It really doesn't make sense to me what the wildcard certificates have to do with this.

@amigthea

amigthea commented Jul 9, 2021

What I know for sure is that the SNI routing is working:

a) it works with the normal certificates
b) it works with the wildcard certificates if I clear the browser cache in between the trials

so it seems like the proxy server or the backend webservers are caching all three SSL sessions (to example.com, to a.example.com and to b.example.com) under the same *.example.com domain from the wildcard certificate, and this is creating a conflict. I do not know how that's possible and can't find anything useful in the nginx documentation

@kekru, do you know where I can look for?

edit:

I found the culprit: on the backend server side I had the listen directive configured with http2 (in addition to ssl and proxy_protocol), and as far as I can understand in my current overeuphoric state of mind, http2 is not supported through the stream proxy_pass directive: after deleting it, everything works flawlessly

hope that this can help you @bluefangs

@kekru
Author

kekru commented Jul 9, 2021

Hi @Azertooth, thanks for sharing. Good to know that it does not work with an http2 upstream.

@alturismo

alturismo commented Jul 26, 2021

Hi, may I ask whether this would even be possible?

I have various HTTP reverse proxy server rules running, so far so good. Now I would like to extend this to, let's say as a sample, RDP through the reverse proxy. Would that be possible in some way?

Sample code in nginx.conf (not working), while nginx is running on ports 80 and 443 on IP 192.168.1.83:

stream {

map $ssl_preread_server_name $name {
    rdp1.mydomain.de rdp1_backend;
    rdp2.mydomain.de rdp2_backend;
    default https_default_backend;
}

upstream rdp1_backend {
    server 192.168.1.210:3389;
}

upstream rdp2_backend {
    server 192.168.1.215:3389;
}

upstream https_default_backend {
    server 192.168.1.83:443;
}

server {
    listen 192.168.1.83:444;
    proxy_pass $name;
    ssl_preread on;
}

}

So, I'd like to have rdp1.mydomain.de forward RDP to one client and rdp2.mydomain.de to another, while all other HTTP requests should end up on the reverse proxy with its own http server blocks ... but I actually can't find out whether it's possible to route RDP through this, or how to route to the general reverse proxy when no map entry matches.

What makes me wonder: in a lot of online samples, the stream server is also listening on 443 while the default server is also on 443 ... when I do this I get the (obvious) error that 443 is already in use ...

Thanks ahead for any tips.

@kevprice83

@kekru any ideas on my comment above? Any help or guidance would be appreciated.

@kekru
Author

kekru commented Aug 1, 2021

@alturismo As RDP (Remote Desktop Protocol) is based directly on TCP (and not HTTP), routing by domain name can only work via Server Name Indication (SNI), so you need "non-terminating, TLS passthrough". So the "ssl_preread on;" in your example is correct, and the rest of your config looks good, too.
You also need to be sure that your RDP client sends the SNI in the TLS handshake. See also this question. But it's not very easy to find out whether it sends the SNI.
According to Wikipedia, RDP also has a UDP-based port. As far as I know, UDP has no SNI support (or anything like that). If you get the TCP port working, the next problem will be UDP.

@kevprice83 Sorry, no idea

@alturismo

@kekru thanks for the answer. I'd say I'll drop this experiment then and stay with Apache Guacamole for RDP instead. Thanks for taking a look.

@MichaelVoelkel

Hi. Thanks for the instructions. Any idea what could be wrong on my side? $ssl_server_name is filled on the first request, but empty on the 2nd, 3rd, ... request :( "nginx reload" fills it again, but only for one request.

@libDarkstreet

Well. It doesn't work for me. I keep getting "SSL routines:ssl3_read_bytes:application data after close notify" errors.
Here is my config:

stream {
    upstream dns {
        zone dns 64k;
        server dns-server:53 fail_timeout=7s;
        server dns-server2:53 fail_timeout=7s;
        server dns-server3:53 fail_timeout=7s;
    }

    upstream filtered {
        zone filtered 64k;
        server filtered1:53 fail_timeout=5s;
        server filtered2:53 fail_timeout=5s;
        server filtered3:53 fail_timeout=5s;
    }
   
    map $ssl_server_name $servers {
        dot.domain.com       dns;
        ad-block.domain.com  filtered;
    }

    server {
        listen              853 ssl;
        listen              [::]:853 ssl;
        proxy_pass $servers;
        ssl_protocols            TLSv1.2 TLSv1.3;
        ssl_ciphers              ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
        ssl_handshake_timeout    7s;
        ssl_session_cache        shared:SSL:20m;
        ssl_session_timeout      4h; 

        # SSL
        ssl_certificate         Path;
        ssl_certificate_key     Path;

    }
}

@libDarkstreet

Okay. I solved the problem. I replaced NGiNX with HAproxy. :DD

@jdasari-msft

@libDarkstreet, I have a similar requirement, where the clients' (on-prem) requests go through an SNI proxy to the public cloud. I need a way to configure the proxy to be SNI-only (transparent forward passthrough), where it only reads the Client Hello packet and routes based on the SNI; in other words, a non-terminating SNI proxy. Can this be achieved with HAProxy, Squid or Nginx? If so, can you please forward me a sample configuration or any reference I can look into? Thanks, TIA.

@AeroNotix

Hey thanks for this!

@vinnie357

vinnie357 commented Feb 6, 2023

I had to add the "hostnames;" directive to the maps to support the wildcard (leading-dot) names below.
Without it, only default would match.
This is not in the docs/examples for preread: http://nginx.org/en/docs/stream/ngx_stream_ssl_preread_module.html
but it is mentioned here: https://nginx.org/en/docs/stream/ngx_stream_map_module.html#map
Found it here: https://serverfault.com/questions/1023756/nginx-stream-map-with-wildcard

  map $ssl_preread_server_name $targetBackend {
    hostnames;
    ab.mydomain.com  upstream1.example.com:443;
    xy.mydomain.com  upstream2.example.com:443;
    .sub.mydomain.com  upstream3.example.com:443;
    .sub.sub.mydomain.com upstream4.example.com:443;
    default upstream0.example.com:443;
  } 
