
@kekru
Last active May 27, 2024 21:49
Nginx TLS SNI routing, based on subdomain pattern

Nginx can be configured to route to a backend based on the server's domain name, which the client includes in the SSL/TLS handshake (Server Name Indication, SNI).
This works for HTTP upstream servers, but also for any other protocol that can be secured with TLS.

Prerequisites

  • nginx 1.15.9 or later, to allow variables in ssl_certificate and ssl_certificate_key.
  • check nginx -V for the following:
    ...
    TLS SNI support enabled
    ...
    --with-stream_ssl_module 
    --with-stream_ssl_preread_module

It works well with the nginx:1.15.9-alpine docker image.

Non-terminating, TLS pass-through

Pass the TLS stream through to an upstream server, chosen by the domain name in the TLS SNI field. TLS is not terminated here,
so the upstream server can serve HTTPS or any other TLS-secured TCP protocol.

stream {  

  map $ssl_preread_server_name $targetBackend {
    ab.mydomain.com  upstream1.example.com:443;
    xy.mydomain.com  upstream2.example.com:443;
  }   
 
  server {
    listen 443; 
        
    proxy_connect_timeout 1s;
    proxy_timeout 3s;
    resolver 1.1.1.1;
    
    proxy_pass $targetBackend;       
    ssl_preread on;
  }
}
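One caveat worth noting: if a client's SNI matches none of the map entries, $targetBackend stays empty, proxy_pass fails, and the connection is closed. A minimal sketch with a default entry (the fallback backend name is illustrative):

```nginx
map $ssl_preread_server_name $targetBackend {
  ab.mydomain.com  upstream1.example.com:443;
  xy.mydomain.com  upstream2.example.com:443;
  # catch-all for unknown or absent SNI; without it, proxy_pass
  # gets an empty variable and the connection is dropped
  default          fallback.example.com:443;
}
```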

Terminating TLS, forwarding TCP

Terminate TLS at nginx and forward the decrypted TCP stream to the upstream server.

stream {  

  map $ssl_server_name $targetBackend {
    ab.mydomain.com  upstream1.example.com:443;
    xy.mydomain.com  upstream2.example.com:443;
  }

  map $ssl_server_name $targetCert {
    ab.mydomain.com /certs/server-cert1.pem;
    xy.mydomain.com /certs/server-cert2.pem;
  }

  map $ssl_server_name $targetCertKey {
    ab.mydomain.com /certs/server-key1.pem;
    xy.mydomain.com /certs/server-key2.pem;
  }
  
  server {
    listen 443 ssl; 
    ssl_protocols       TLSv1.2;
    ssl_certificate     $targetCert;
    ssl_certificate_key $targetCertKey;
        
    proxy_connect_timeout 1s;
    proxy_timeout 3s;
    resolver 1.1.1.1;
      
    proxy_pass $targetBackend;
  } 
}
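Clients that send no SNI (or an unmapped name) would otherwise get an empty certificate path and a failed handshake; a hedged sketch adding default entries to the certificate maps (the fallback paths are illustrative):

```nginx
map $ssl_server_name $targetCert {
  ab.mydomain.com /certs/server-cert1.pem;
  xy.mydomain.com /certs/server-cert2.pem;
  default         /certs/fallback-cert.pem;  # served when SNI is absent or unknown
}

map $ssl_server_name $targetCertKey {
  ab.mydomain.com /certs/server-key1.pem;
  xy.mydomain.com /certs/server-key2.pem;
  default         /certs/fallback-key.pem;
}
```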

Choose upstream based on domain pattern

The domain name can be matched by a regex pattern, and its parts extracted into variables; see regex_names in the nginx docs.
This can be used to choose a backend/upstream based on the pattern of a (sub)domain. This is inspired by robszumski/k8s-service-proxy.

The following configuration extracts a subdomain into variables and uses them to create the upstream server name.

stream {  

  map $ssl_preread_server_name $targetBackend {
    ~^(?<app>.+)-(?<namespace>.+)\.mydomain\.com$ $app-public.$namespace.example.com:8080;
  }
  ...
}

Your Nginx should be reachable over the wildcard subdomain *.mydomain.com.
A request to shop-staging.mydomain.com will be forwarded to shop-public.staging.example.com:8080.
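Note that `.+` is greedy, so a name containing several hyphens splits at the last one; making the first group non-greedy splits at the first hyphen instead. A sketch of both variants (use only one entry; the commented line shows the alternative):

```nginx
map $ssl_preread_server_name $targetBackend {
  # greedy: shop-eu-staging.mydomain.com -> app=shop-eu, namespace=staging
  ~^(?<app>.+)-(?<namespace>.+)\.mydomain\.com$ $app-public.$namespace.example.com:8080;

  # non-greedy first group: app=shop, namespace=eu-staging
  # ~^(?<app>.+?)-(?<namespace>.+)\.mydomain\.com$ $app-public.$namespace.example.com:8080;
}
```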

K8s service exposing by pattern

In Kubernetes, you can use this to expose all services matching a specific name pattern.
The following configuration exposes all services whose names end with -public.
A request to shop-staging-9999.mydomain.com will be forwarded to shop-public in the namespace staging on port 9999.
You will also need to update the resolver, see below.

stream {  

  map $ssl_preread_server_name $targetBackend {
    ~^(?<service>.+)-(?<namespace>.+)-(?<port>.+)\.mydomain\.com$ $service-public.$namespace.svc.cluster.local:$port;
  }
  
  server {
    ...
    resolver kube-dns.kube-system.svc.cluster.local;
    ...
  }
}
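By default nginx caches a resolved name for the TTL of the DNS record; in a cluster where services come and go, the valid= parameter of the resolver directive bounds that caching. A sketch of the full server block under that assumption (the 10s value is illustrative):

```nginx
server {
  listen 443;
  ssl_preread on;

  # re-resolve service names at most every 10s so redeployed
  # services are picked up without an nginx reload
  resolver kube-dns.kube-system.svc.cluster.local valid=10s;

  proxy_connect_timeout 1s;
  proxy_timeout 3s;
  proxy_pass $targetBackend;
}
```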
@kevprice83

Is there a way to use keepalive to reuse TLS connections to the upstream if multiple upstreams resolve to a single IP? The problem I am seeing: I have one upstream block with multiple server names. This becomes a problem if the upstream host is configured with passthrough, because nginx reuses the connection based on IP, so you will sometimes be routed to the wrong upstream. I can't see any way to work around this without somehow creating a mapping of upstreams and IPs that doesn't depend only on the hostname.

@kekru any ideas on my comment above? Any help or guidance would be appreciated.

@kekru
Author

kekru commented Aug 1, 2021

@alturismo As RDP (Remote Desktop Protocol) is based directly on TCP (and not HTTP), routing by domain name can only work via Server Name Indication (SNI), so you need the "non-terminating, TLS pass-through" variant. So the "ssl_preread on;" in your example is correct, and the rest of your config looks good, too.
You also need to be sure that your RDP client sends the SNI in the TLS handshake. See also this question. But it's not easy to find out whether it sends the SNI.
Looking at Wikipedia, RDP also has a UDP-based port. As far as I know, UDP has no SNI support (or anything like it). If you get the TCP port working, UDP will be the next problem.

@kevprice83 Sorry no idea

@alturismo

@kekru thanks for the answer. I'd say I'll drop this experiment then and stay on Apache Guacamole for RDP instead. Thanks for taking a look.

@MichaelVoelkel

Hi, thanks for the instructions. Any idea what could be wrong on my side? $ssl_server_name is filled on the first request, but empty on the 2nd, 3rd, ... requests :( "nginx reload" fills it again, but only for one request.

@libDarkstreet

Well, it doesn't work for me. I keep getting "SSL routines:ssl3_read_bytes:application data after close notify" errors.
Here is my config:

stream {
    upstream dns {
        zone dns 64k;
        server dns-server:53 fail_timeout=7s;
        server dns-server2:53 fail_timeout=7s;
        server dns-server3:53 fail_timeout=7s;
    }

    upstream filtered {
        zone filtered 64k;
        server filtered1:53 fail_timeout=5s;
        server filtered2:53 fail_timeout=5s;
        server filtered3:53 fail_timeout=5s;
    }
   
    map $ssl_server_name $servers {
        dot.domain.com       dns;
        ad-block.domain.com  filtered;
    }

    server {
        listen              853 ssl;
        listen              [::]:853 ssl;
        proxy_pass $servers;
        ssl_protocols            TLSv1.2 TLSv1.3;
        ssl_ciphers              ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
        ssl_handshake_timeout    7s;
        ssl_session_cache        shared:SSL:20m;
        ssl_session_timeout      4h; 

        # SSL
        ssl_certificate         Path;
        ssl_certificate_key     Path;

    }
}

@libDarkstreet

Okay, I solved the problem. I replaced Nginx with HAProxy. :DD

@jdasari-msft

@libDarkstreet, I have a similar requirement, where the clients' (on-prem) requests would go through an SNI proxy to the public cloud. I need a way to configure the proxy to be SNI-only (a transparent forward pass-through), where it only reads the Client Hello packet and routes based on the SNI; in other words, a non-terminating SNI proxy. Can this be achieved with HAProxy, Squid, or Nginx? If so, can you please forward me a sample configuration or any reference I can look into? Thanks, TIA.

@AeroNotix

Hey thanks for this!

@vinnie357

vinnie357 commented Feb 6, 2023

I had to add the "hostnames;" directive to the maps to support the wildcard entries.
Without it, only default would match.
This is not in the docs/examples for preread: http://nginx.org/en/docs/stream/ngx_stream_ssl_preread_module.html
but it is mentioned here: https://nginx.org/en/docs/stream/ngx_stream_map_module.html#map
Found it here: https://serverfault.com/questions/1023756/nginx-stream-map-with-wildcard

  map $ssl_preread_server_name $targetBackend {
    hostnames;
    ab.mydomain.com  upstream1.example.com:443;
    xy.mydomain.com  upstream2.example.com:443;
    .sub.mydomain.com  upstream3.example.com:443;
    .sub.sub.mydomain.com upstream4.example.com:443;
    default upstream0.example.com:443;
  } 
