Nginx TLS SNI routing, based on subdomain pattern

Nginx can be configured to route to a backend based on the server's domain name, which is included in the TLS handshake (Server Name Indication, SNI).
This works for HTTP upstream servers, but also for any other protocol that can be secured with TLS.

prerequisites

  • at least nginx 1.15.9 to use variables in ssl_certificate and ssl_certificate_key.
  • check nginx -V for the following:
    ...
    TLS SNI support enabled
    ...
    --with-stream_ssl_module 
    --with-stream_ssl_preread_module

It works well with the nginx:1.15.9-alpine docker image.
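
Note that the stream blocks shown below live at the top level of nginx.conf, next to the http block, not inside it. A minimal skeleton of how the pieces fit together (values are only examples):

# /etc/nginx/nginx.conf (sketch)
worker_processes auto;

events {
  worker_connections 1024;
}

# http { ... }   # regular HTTP virtual hosts, if any

stream {
  # the SNI routing examples from the sections below go here
}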

non terminating, TLS pass through

Pass the TLS stream to an upstream server, based on the domain name from the TLS SNI field. TLS is not terminated at nginx.
The upstream server can serve HTTPS or any other TLS-secured TCP protocol.

stream {  

  map $ssl_preread_server_name $targetBackend {
    ab.mydomain.com  upstream1.example.com:443;
    xy.mydomain.com  upstream2.example.com:443;
  }   
 
  server {
    listen 443; 
        
    proxy_connect_timeout 1s;
    proxy_timeout 3s;
    resolver 1.1.1.1;
    
    proxy_pass $targetBackend;       
    ssl_preread on;
  }
}

terminating TLS, forward TCP

Terminate TLS at nginx and forward the plain TCP stream to the upstream server.

stream {  

  map $ssl_server_name $targetBackend {
    ab.mydomain.com  upstream1.example.com:443;
    xy.mydomain.com  upstream2.example.com:443;
  }

  map $ssl_server_name $targetCert {
    ab.mydomain.com /certs/server-cert1.pem;
    xy.mydomain.com /certs/server-cert2.pem;
  }

  map $ssl_server_name $targetCertKey {
    ab.mydomain.com /certs/server-key1.pem;
    xy.mydomain.com /certs/server-key2.pem;
  }
  
  server {
    listen 443 ssl; 
    ssl_protocols       TLSv1.2;
    ssl_certificate     $targetCert;
    ssl_certificate_key $targetCertKey;
        
    proxy_connect_timeout 1s;
    proxy_timeout 3s;
    resolver 1.1.1.1;
      
    proxy_pass $targetBackend;
  } 
}

Choose upstream based on domain pattern

The domain name can be matched with a regex pattern and its parts extracted into variables, see regex_names.
This can be used to choose a backend/upstream based on the pattern of a (sub)domain. It is inspired by robszumski/k8s-service-proxy.

The following configuration extracts a subdomain into variables and uses them to create the upstream server name.

stream {  

  map $ssl_preread_server_name $targetBackend {
    ~^(?<app>.+)-(?<namespace>.+)\.mydomain\.com$ $app-public.$namespace.example.com:8080;
  }
  ...
}

Your nginx must be reachable via a wildcard DNS entry for *.mydomain.com.
A request to shop-staging.mydomain.com will then be forwarded to shop-public.staging.example.com:8080.
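
For completeness, a sketch of the full stream block for this pattern-based routing, combining the map above with the pass-through server block from the first example (same example ports, resolver and domains as above):

stream {

  map $ssl_preread_server_name $targetBackend {
    ~^(?<app>.+)-(?<namespace>.+)\.mydomain\.com$ $app-public.$namespace.example.com:8080;
  }

  server {
    listen 443;

    proxy_connect_timeout 1s;
    proxy_timeout 3s;
    resolver 1.1.1.1;   # required, because the upstream name is only known at runtime

    proxy_pass $targetBackend;
    ssl_preread on;
  }
}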

K8s service exposing by pattern

In Kubernetes, you can use this to expose all services with a specific name pattern.
The following configuration exposes all services whose names end with -public.
A request to shop-staging-9999.mydomain.com will be forwarded to the service shop-public in the namespace staging on port 9999.
You will also need to update the resolver, see below.

stream {  

  map $ssl_preread_server_name $targetBackend {
    ~^(?<service>.+)-(?<namespace>.+)-(?<port>.+)\.mydomain\.com$ $service-public.$namespace.svc.cluster.local:$port;
  }
  
  server {
    ...
    resolver kube-dns.kube-system.svc.cluster.local;
    ...
  }
}
@razorRun commented Aug 16, 2019

Do we need to have an SSL certificate on ab.mydomain.com in this case?

@kekru (owner) commented Aug 16, 2019

Hi razorRun, yes, you need a certificate on the nginx when using "terminating TLS, forward TCP".

If you use "non terminating, TLS pass through", then you need the certificate on the backend server, but not on the nginx.

In both cases the certificate must match ab.mydomain.com.

@razorRun commented Aug 17, 2019

@kekru Thanks mate for the reply.

I have a small clarification, if you don't mind. What I want to do is:

vpn1.app.com     ─┬─► nginx at 10.0.0.1 ─┬─► vpn1 at another-server-1
vpn2.app.com     ─┤                      ├─► vpn2 at another-server-2
wildcard.app.com ─┘                      └─► Y

I have a wildcard (all subdomains) pointed to an nginx server.

The thing is, I want to do a dynamic mapping and I am not in control of another-server-X. If a client asks for a (not fixed) subdomain, I will have to have a lookup table and map the subdomain to the expected external server.

So do I have to add an SSL cert to my nginx server? Will it give me the SNI? I am getting a blank "-" at the moment.

Any help will be really handy.
Thanks in advance

Current Config

stream {
log_format basic '$remote_addr [$time_local] '
'$ssl_preread_server_name'
'$protocol $status $bytes_sent $bytes_received '
'$session_time';

access_log  /var/log/nginx/access.log basic;
error_log  /var/log/nginx/error.log debug;

#I will dynamically update the map section
map $ssl_preread_server_name $targetBackend {
sample.mydomain.com 32.23.232.32:3431;
xy.mydomain.com 44.23.342.32:3431;
}
server {
listen 80;
proxy_pass $targetBackend;
ssl_preread on;
proxy_connect_timeout 1s;
proxy_timeout 3s;
resolver 1.1.1.1;

}

}

@kekru (owner) commented Aug 19, 2019

Hi razorRun,

  1. Does "vpn1 at another-server-1" have a certificate? If no, then you need a certificate at your nginx.
    Same for "vpn2 at another-server-2"
  2. The SNI comes from the client, during the TLS handshake between client and nginx.
  3. It is not guaranteed, that all clients send an SNI, but most HTTPS clients should do, including all browsers.
  4. Which protocol are you using, between client and nginx? HTTPS or another? If another, be sure, that it is build on top of TCP+TLS. Otherwise it won't work.
  5. Your current config listens on port 80. If your protocol is HTTPS, be sure to explicitly write it in your browser's address line, or your browser will send HTTP by default. Then you don't have an SNI. SNI is only for TLS secured connections.
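
A minimal, untested sketch of that corrected listener, reusing the pass-through example from the article and the map entries from the config above:

stream {

  map $ssl_preread_server_name $targetBackend {
    sample.mydomain.com 32.23.232.32:3431;
    xy.mydomain.com     44.23.342.32:3431;
  }

  server {
    listen 443;        # TLS clients connect here, so the ClientHello (and its SNI) is visible
    ssl_preread on;    # read the SNI without terminating TLS

    proxy_connect_timeout 1s;
    proxy_timeout 3s;
    resolver 1.1.1.1;

    proxy_pass $targetBackend;
  }
}
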
@paravz commented Jan 13, 2020

"terminating TLS, forward TCP" can be extended by adding a ssl_preread listener with stream ssl listeners as backends.

This way each stream tls listener can have unique ssl configuration, not just parametrized ssl certs. This makes it possible to enable mutual tls for some clients based on available ssl_preread variables http://nginx.org/en/docs/stream/ngx_stream_ssl_preread_module.html#variables, ie $ssl_preread_server_name
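
A rough, untested sketch of that two-tier idea, assuming internal loopback listeners on ports 8443/8444 and reusing the example cert paths and backends from the article:

stream {

  # front listener: routes by SNI without terminating TLS
  map $ssl_preread_server_name $tlsListener {
    ab.mydomain.com 127.0.0.1:8443;
    xy.mydomain.com 127.0.0.1:8444;
  }

  server {
    listen 443;
    ssl_preread on;
    proxy_pass $tlsListener;
  }

  # per-host terminating listeners, each with its own ssl configuration
  server {
    listen 127.0.0.1:8443 ssl;
    ssl_certificate        /certs/server-cert1.pem;
    ssl_certificate_key    /certs/server-key1.pem;
    ssl_verify_client      on;                      # mutual TLS only for this host
    ssl_client_certificate /certs/client-ca1.pem;   # hypothetical CA bundle
    proxy_pass upstream1.example.com:443;
  }

  server {
    listen 127.0.0.1:8444 ssl;
    ssl_certificate     /certs/server-cert2.pem;
    ssl_certificate_key /certs/server-key2.pem;
    proxy_pass upstream2.example.com:443;
  }
}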

@totayma commented Feb 2, 2020

How can I use this to proxy google.com? For example: run a DNS server and make an A record google.com >>> 1.2.3.4, and on this server (1.2.3.4) run nginx as a proxy to the origin google.com at 172.217.23.142.

@kekru (owner) commented Feb 3, 2020

@totayma Should work, I think. The only problem will be that you don't have a valid TLS cert for google.com.

@rayray221 commented Apr 18, 2020

@kekru is it possible to have multiple configurations for terminating TLS on the same reverse proxy? I currently have an HTTPS upstream server defined and working, terminating at the proxy. I would like to add another server to the configuration that would terminate on the backend server instead. I have tried adding a stream directive with a configuration similar to the above to my config, but nginx seems to ignore it and routes to my default site anyway. Is this even possible? Any help is greatly appreciated!

@kekru (owner) commented May 11, 2020

Hi @rayray221,
sorry for the late answer. Did you already find a solution?

Do I understand correctly? You have one nginx. And you want to configure at the same time:

  • non terminating, TLS pass through for hello.example.com
  • terminating TLS, forward TCP for world.example.com

I'm not a super nginx expert, but I think it will work, just not on the same port.
I think you need something like this:

stream {  
   
   ... # content: see above, in the article
   ...

   server { # non terminating, TLS pass through
    listen 443; 
        
    proxy_connect_timeout 1s;
    proxy_timeout 3s;
    resolver 1.1.1.1;
    
    proxy_pass $targetBackend; 
    ssl_preread on;
  }
  
  server { # terminating TLS, forward TCP
    # here another port
    listen 8443 ssl; 
    ssl_protocols       TLSv1.2;
    ssl_certificate     $targetCert;
    ssl_certificate_key $targetCertKey;
        
    proxy_connect_timeout 1s;
    proxy_timeout 3s;
    resolver 1.1.1.1;
      
    proxy_pass $targetBackend;
  } 
}

As you can see, I chose two different ports (443 and 8443).
But I'm not sure, I did not test this.

@rayray221 commented May 11, 2020

Thanks @kekru. I ended up terminating both on the reverse proxy. To be honest, the reason I wanted to have multiple terminations was more out of laziness than need. I had an existing Nginx web server that was terminating already, and was adding an additional website under the same IP. The existing one has a bit of a complicated setup and I didn't want to spend the time troubleshooting it. All in all, I did get this working and it is all terminating at my reverse proxy which honestly is the desired configuration anyway.

Appreciate your response!

@LeonDragon commented Jun 24, 2020

I am a newbie in this area, but I have a naive question: does this configuration have anything to do with the DNS server, or is it just for the webserver (i.e. the VPS)? Let's say I am not configuring anything in the DNS server (such as subdomains) and my domain is www.example.com. The server is configured to "extract a subdomain into variables" as in your code.

  • When a client requests content at abc.example.com, what is the DNS server's behavior? Does the DNS server pass the request through to the designated webserver with the metadata [abc.example.com], so that the server knows which part of the website the client wants to connect to?

Thank you in advance!

@kekru (owner) commented Jun 24, 2020

Hi @LeonDragon,
your DNS server must return the IP of your webserver when abc.example.com is requested.

For example, if your webserver's IP is 11.22.33.44:

  • You can use a wildcard A record in your DNS: A *.example.com 11.22.33.44 (I prefer this)
  • Or a simple A record: A abc.example.com 11.22.33.44

Then your webserver gets the domain name abc.example.com in the HTTP Host header (or, for the stream examples here, in the TLS SNI field).
It can extract parts of the domain and knows which parts of the website it should show, or which other server it should delegate to.

@LeonDragon commented Jun 25, 2020

Thanks @kekru, I understand now :)

@blackandred commented Aug 27, 2020

Good post, thanks.

@bluefangs commented Sep 9, 2020

Hi @kekru,

I have the following question:

I have a couple of HTTPS services that are running inside docker containers. I also have an nginx container set up so that it routes URLs to the relevant containers. Each of the HTTPS services in the respective containers uses a certificate with a wildcard DNS name. In my case:

[ alternate_names ]

DNS.1        = *.myapps.local

I have configured nginx to NOT terminate the SSL connection, but rather pass it through to the backend servers:

redirect_http.conf

# Redirect any http request on port 80 to https

server {
  listen        80;

  server_name   _;

  return 301 https://$host$request_uri;
}

passthrough.stream

# https://gerco.dev/NGINX-Reverse-Proxy-with-TLS-Passthrough/


map $ssl_preread_server_name $name {
		test1.myapps.local  server1_https;
		test2.myapps.local  server2_https;
		default $ssl_preread_server_name;
}


upstream server1_https {
        server service1:443; # ---------> Since I've linked the containers in the compose file, this is valid
}

upstream server2_https {
        server service2:443; # ---------> Since I've linked the containers in the compose file, this is valid
}


server {
		listen 443;
		listen [::]:443;
		ssl_preread on;
		proxy_ssl_server_name on;
		# proxy_ssl_session_reuse off;
		proxy_pass $name;

}

I have set up the /etc/hosts file as below:

192.168.1.50  test1.myapp.local test2.myapp.local

The problem being:

when I access test1.myapp.local --> service1's page gets rendered.
when I access test2.myapp.local --> service1's page STILL gets rendered.

I'm hosting 2 subdomains in the same IP. And each time, no matter which of the two URLs I visit, I always end up at the first service.

How can I fix this? My understanding is that $ssl_preread_server_name is supposed to tell me the domain I am visiting? Is the fact that I'm using a wildcard alternate_name in the cert to blame somehow?

Thanks.

@kekru (owner) commented Sep 10, 2020

Hi @bluefangs,
your config looks good to me, I would expect it to work.

I don't think that the cert's content is relevant here. Nginx only looks at the SNI in the TLS handshake, so the cert should not matter.

Maybe you should try to get some more logging out of nginx, see the sketch below.
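
A minimal logging sketch for the stream block, based on the log_format used earlier in this thread; $name is the map variable from the config above, and the log path is only an example:

stream {
  # log the SNI nginx saw and the upstream it picked
  log_format sni_debug '$remote_addr [$time_local] '
                       'sni="$ssl_preread_server_name" upstream="$name" '
                       '$protocol $status $bytes_sent $bytes_received $session_time';

  access_log /var/log/nginx/stream-access.log sni_debug;

  # map / upstream / server blocks as in the config above
}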

@darc1 commented Oct 21, 2020

@kekru Cool stuff, do you know if DTLS passthrough is available as well?

@kekru (owner) commented Oct 21, 2020

Never heard of DTLS before ^^ When googling, I only found this experimental description:
http://nginx.org/patches/dtls/README.txt
Maybe that works.

@pathardepavan commented Jan 6, 2021

I have a k8s cluster with multiple backend services. For one of the services I am trying to do TLS forwarding, whereas for the other services we can terminate TLS in nginx itself. I have installed the nginx ingress helm chart and set the SSL termination argument to true. We have only one entry point via nginx for the services. How do I achieve SNI routing for k8s services? Can someone help?

@kekru (owner) commented Jan 7, 2021

@pathardepavan
I did not configure it for k8s yet, but I think SSL Passthrough is what you are looking for. It seems to be disabled by default in nginx ingress.
If nginx does not work for you, you could maybe replace it with Traefik

@SuzukiHonoka commented Mar 6, 2021

Thanks for sharing!

@kevprice83 commented Apr 22, 2021

Is there a way to use keepalive to reuse TLS connections to the upstream if multiple upstreams resolve to a single IP? The problem I am seeing: I have one upstream block with multiple server names, and this becomes a problem if the upstream host is configured with passthrough, because the connection is reused by nginx based on IP and you are sometimes routed to the wrong upstream. I can't see any way to work around this without somehow creating a mapping of upstreams and IPs that doesn't only depend on the hostname.

@maganuk commented Jul 5, 2021

Hi @kekru, do you know if this will work with the ssl_client_certificate directive as well? We are trying to configure the path of the client certificate file dynamically.

Thanks for sharing!

@kekru (owner) commented Jul 5, 2021

Hi @maganuk, I did not try it myself, but this approach looks very generic. If it works for ssl_certificate and ssl_certificate_key, it should work for ssl_client_certificate, too.

@Azertooth commented Jul 9, 2021

> (quoting @bluefangs' comment from Sep 9, 2020, above)

I am in EXACTLY the same situation (but with 3 backend servers): as soon as I started to use a wildcard certificate, this problem showed up. Did you ever solve it?

To add some more information:

Proxy configuration:

[...]

stream {

  map $ssl_preread_server_name $upstream {
    default          0.0.0.0;
    example.com      upstream_example;
    a.example.com    upstream_a_example;
    b.example.com    upstream_b_example;

  }

  server {
    listen 10.1.1.10:443;
    ssl_preread on;
    proxy_protocol on;
    proxy_connect_timeout 5s;
    proxy_pass $upstream;
  }

  upstream upstream_example {
    server 10.1.1.11:443;
  }

  upstream upstream_a_example {
    server 10.1.1.12:443;
  }

  upstream upstream_b_example {
    server 10.1.1.13:443;
  }

}

Backend servers:

example.com

  server {
    listen 80;
    server_name example.com;
    return 301 https://$server_name:443$request_uri;
}

  server {
    listen 443;
    server_name example.com;
    ssl_certificate /path/to/wildcard_cert_fullchain.pem;
    ssl_certificate_key /path/to/wildcard_cert_privkey.pem;
    [...]
}

a.example.com


  server {
    listen 80;
    server_name a.example.com;
    return 301 https://$server_name:443$request_uri;
}

  server {
    listen 443;
    server_name a.example.com;
    ssl_certificate /path/to/wildcard_cert_fullchain.pem;
    ssl_certificate_key /path/to/wildcard_cert_privkey.pem;
    [...]
}

b.example.com

  server {
    listen 80;
    server_name b.example.com;
    return 301 https://$server_name:443$request_uri;
}

  server {
    listen 443;
    server_name b.example.com;
    ssl_certificate /path/to/wildcard_cert_fullchain.pem;
    ssl_certificate_key /path/to/wildcard_cert_privkey.pem;
    [...]
}
  • The first backend server I visit is displayed correctly, but after that the other two redirect me to the first one.
  • This behaviour lasts for about 10 minutes; if I go idle in that time and then refresh the browser page, it connects me correctly to the URL that was giving me the wrong page, but then again, if I try to connect to the other two websites shortly after that, it redirects me back to the last working one.
  • Cleaning the browser cache "resets" that, so I can visit each backend server within those 10 minutes if I clean the browser cache between each visit.
  • I left the proxy_timeout directive at the default value (10 minutes); if I set it to 3 seconds, as in the original post, the problem is somehow bypassed, but that creates other problems (some connections that need time to complete are terminated early, redirects don't happen if connections occur within those 3 seconds, etc.).
  • I tcpdumped the connections and SNI is working as intended: it couldn't be otherwise, because if I use normal certificates, TLS sessions are routed correctly to each backend server.

It really doesn't make sense to me what the wildcard certificates have to do with this.

@Azertooth commented Jul 9, 2021

What I know for sure is that the SNI routing is working:

a) it works with the normal certificates
b) it works with the wildcard certificates if I clear the browser cache between the trials

So it is as if the proxy server or the backend webservers are caching all three SSL sessions (to example.com, to a.example.com and to b.example.com) under the same *.example.com domain expressed in the wildcard certificate, and this is creating a conflict. I do not know how that is possible and can't find anything useful in the nginx documentation.

@kekru, do you know where I can look for?

edit:

I found the culprit: on the backend servers' side I had the listen directive configured with http2 (in addition to ssl and proxy_protocol), and as far as I can understand in my current overeuphoric state of mind, http2 is not supported through the proxy_pass directive: after deleting it, everything now works flawlessly.
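
For reference, a sketch of the backend listen directive change described above (the exact flags are reconstructed from the description, so treat them as an assumption):

  # before: HTTP/2 enabled on the backend listener, routing through the stream proxy misbehaves
  # (browsers may also coalesce HTTP/2 connections across hosts covered by the same wildcard cert)
  # listen 443 ssl http2 proxy_protocol;

  # after: plain TLS listener behind the ssl_preread stream proxy
  listen 443 ssl proxy_protocol;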

hope that this can help you @bluefangs

@kekru

This comment has been minimized.

Copy link
Owner Author

@kekru kekru commented Jul 9, 2021

Hi @Azertooth thanks for sharing. Good to know that it does not work for upstream http2

@alturismo commented Jul 26, 2021

Hi, may I ask some questions about whether this is even possible?

I have various HTTP reverse proxy server rules running, so far so good. Now I would like to extend this to, let's say, RDP through the reverse proxy. Would that be possible in some way?

Sample code in nginx.conf (not working), while nginx is running on ports 80 and 443 on IP 192.168.1.83:

stream {

  map $ssl_preread_server_name $name {
    rdp1.mydomain.de rdp1_backend;
    rdp2.mydomain.de rdp2_backend;
    default https_default_backend;
  }

  upstream rdp1_backend {
    server 192.168.1.210:3389;
  }

  upstream rdp2_backend {
    server 192.168.1.215:3389;
  }

  upstream https_default_backend {
    server 192.168.1.83:443;
  }

  server {
    listen 192.168.1.83:444;
    proxy_pass $name;
    ssl_preread on;
  }

}

So I'd like to have rdp1.mydomain.de forwarded as RDP (and rdp2.mydomain.de to another client), while all other HTTP requests should end up on the reverse proxy with its own http server blocks ... but I actually can't find out whether it's possible to tunnel RDP through this, or how to route to the general reverse proxy when no map entry matches.

What makes me wonder: in a lot of online samples, the stream server also listens on 443 while the default http server is on 443 as well ... when I do this, I get the (obvious) error that 443 is already in use ...

Thanks ahead for any tips.

@kevprice83 commented Jul 27, 2021

> (re-posting my question from Apr 22, 2021 above)

@kekru any ideas on my comment above? Any help or guidance would be appreciated.
