# This config came around after a friend had problems with a Steam cache on his
# Cox internet connection. Cox would intercept any requests to Steam content
# servers and return a 302 to Cox's servers. The cache would return the 302
# to the Steam client, and the Steam client would go directly to Cox, bypassing
# the cache.

# This config makes nginx follow the 302 itself, and caches the result of the
# redirect as if it was the response to the original request. So subsequent
# requests to the URL that returned a 302 will get the file instead of a 302.

proxy_cache_path /cache keys_zone=steam:100m levels=1:2 inactive=100d max_size=1000g;

server {
    listen 80;
    charset utf-8;
    client_max_body_size 75M;

    # main cache block - when upstream responds with a 302, it's caught by
    # error_page and passed off to the (nearly identical) @handle_redirects
    location / {
        proxy_pass http://web;
        proxy_cache steam;
        proxy_cache_key $uri;
        proxy_cache_valid 200 206 3000h;
        proxy_intercept_errors on;
        error_page 301 302 307 = @handle_redirects;
    }

    location @handle_redirects {
        # Store the current state of the world so we can reuse it in a minute.
        # We need to capture these values now, because as soon as we invoke
        # the proxy_* directives, these will disappear.
        set $original_uri $uri;
        set $orig_loc $upstream_http_location;

        # nginx goes to fetch the value from the upstream Location header
        proxy_pass $orig_loc;
        proxy_cache steam;

        # But we store the result with the cache key of the original request URI
        # so that future clients don't need to follow the redirect too
        proxy_cache_key $original_uri;
        proxy_cache_valid 200 206 3000h;
    }
}
Thank you, this works fine.
I had to add a resolver directive to the @handle_redirects location like @ypujante, otherwise I got a "no resolver defined" error.
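A minimal sketch of that fix: because proxy_pass receives a variable ($orig_loc), nginx must resolve the hostname at request time and needs a resolver configured. The DNS server address below is an assumption; substitute whatever resolver your network uses.

```
location @handle_redirects {
    # Assumption: 8.8.8.8 stands in for your actual DNS server.
    # Without this, proxy_pass with a variable fails with "no resolver defined".
    resolver 8.8.8.8 ipv6=off;

    set $original_uri $uri;
    set $orig_loc $upstream_http_location;
    proxy_pass $orig_loc;
    # ... remaining directives as in the gist above ...
}
```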
Hi. I made a load balancer using upstream, as follows.
upstream appservers {
least_conn;
server srv1.domain.com:443;
server srv2.domain.com:443;
}
server {
    listen 443 ssl http2;
    server_name domain.com;
    location /hls {
        proxy_pass https://appservers/hls;
    }
}
I set up nginx.conf similar to the above.
It's working fine now, but the main server domain.com:443 is heavily loaded by the live streams. How can I 302-redirect this traffic to the edge servers?
That is, I want to send every incoming request to the edge servers as a redirect:
domain.com/hls/test.m3u8 > srv1.domain.com/hls/test.m3u8
domain.com/hls/test.m3u8 > srv2.domain.com/hls/test.m3u8
In other words, I want to distribute the live streams across the upstream servers.
Thanks.
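One possible approach (a hedged sketch, not from this thread; the split percentages and edge hostnames are assumptions based on the config quoted above) is to skip proxying entirely and have the main server answer /hls requests with a 302 pointing at an edge server chosen by split_clients:

```
# http {} context: deterministically map each request to an edge host.
split_clients "$request_id" $edge {
    50%   srv1.domain.com;
    *     srv2.domain.com;
}

server {
    listen 443 ssl http2;
    server_name domain.com;

    location /hls {
        # Redirect the player to the chosen edge instead of proxying,
        # so the stream bytes never pass through the main server.
        return 302 https://$edge$request_uri;
    }
}
```

Note that unlike least_conn, split_clients has no view of live connection counts; it only hashes requests across the listed hosts, so the balancing is statistical rather than load-aware.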
@stopperbir Hello, did you find any solution for this?
@Zonimi Unfortunately I still haven't found it. I tried many things but it doesn't work.
This gist got me 90% of the way there, but somehow the redirect was still being cached, I think. The suggestion of adding

proxy_ignore_headers Set-Cookie;
proxy_ignore_headers Cache-Control;

after error_page 301 302 307 = @handle_redirects; has worked for me. I tried to use proxy_no_cache $http_location, proxy_no_cache $sent_http_location, or proxy_no_cache $upstream_http_location to prevent the redirect itself from being cached, but that statement was apparently being ignored. I would love to understand why this is needed but I don't know how to start debugging it.
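A plausible explanation (my assumption, not confirmed in the thread): proxy_cache_valid only governs responses that carry no caching headers of their own. If the upstream attaches Cache-Control or Expires to its 302, nginx honors those headers and caches the redirect even though 302 is absent from proxy_cache_valid, and proxy_no_cache cannot help because $upstream_http_location is not evaluated the way one might expect at that point. Ignoring those headers in the main location restores the intended intercept-only behavior:

```
location / {
    proxy_pass http://web;
    proxy_cache steam;
    proxy_cache_key $uri;
    proxy_cache_valid 200 206 3000h;
    # Assumption: the upstream's Cache-Control/Expires on the 302 were
    # overriding proxy_cache_valid and causing the redirect to be cached.
    proxy_ignore_headers Cache-Control Expires Set-Cookie;
    proxy_intercept_errors on;
    error_page 301 302 307 = @handle_redirects;
}
```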
Some troubleshooting notes while trying to proxy services.gradle.org (and locally cache requested binaries). (At one point $upstream_http_location was "".)

Main:

proxy_cache_path /data/cache levels=1:2 keys_zone=my_cache:10m max_size=60g inactive=100000m use_temp_path=off;

Server: