
nginx microcaching config example
# Set cache dir (the zone holds keys in shared memory; cached bodies live on disk)
proxy_cache_path /var/cache/nginx levels=1:2
                 keys_zone=microcache:5m max_size=1000m;

# Custom logging
# (log_format is only valid at the http level, not inside server/location)
log_format custom '$remote_addr - $remote_user [$time_local] '
                  '"$request" $status $body_bytes_sent '
                  '"$http_referer" "$http_user_agent" nocache:$no_cache';

# Virtualhost/server configuration
server {
    listen 80;
    server_name yourhost.domain.com;

    # Define cached location (may not be whole site)
    location / {
        # Set up variable defaults
        set $no_cache "";

        # If non GET/HEAD, don't cache & mark user as uncacheable for 1 second via cookie
        if ($request_method !~ ^(GET|HEAD)$) {
            set $no_cache "1";
        }

        # Drop no-cache cookie if need be
        # (for some reason, add_header fails if included in prior if-block)
        if ($no_cache = "1") {
            add_header Set-Cookie "_mcnc=1; Max-Age=2; Path=/";
            add_header X-Microcachable "0";
        }

        # Bypass cache if no-cache cookie is set
        if ($http_cookie ~* "_mcnc") {
            set $no_cache "1";
        }

        # Bypass cache if flag is set
        proxy_no_cache $no_cache;
        proxy_cache_bypass $no_cache;

        # Point nginx to the real app/web server
        proxy_pass http://appserver.domain.com;

        # Set cache zone
        proxy_cache microcache;

        # Set cache key to include identifying components
        proxy_cache_key $scheme$host$request_method$request_uri;

        # Only cache valid HTTP 200 responses, for 1 second
        proxy_cache_valid 200 1s;

        # Serve a stale entry from the cache while it is being refreshed
        proxy_cache_use_stale updating;

        # Send appropriate headers through
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Stream responses larger than 1M instead of buffering them to a temp file
        proxy_max_temp_file_size 1M;

        access_log /var/log/nginx/microcache.log custom;
    }
}
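One way to verify the cache is actually working (an optional addition, not part of the original gist) is to expose nginx's built-in `$upstream_cache_status` variable in a response header inside the same `location` block, then watch it with `curl -I`:

```nginx
# Optional debugging aid: report the cache verdict (MISS, HIT, UPDATING,
# BYPASS, EXPIRED) on every response so behavior is visible from the client
add_header X-Cache-Status $upstream_cache_status;
```

Repeated GETs within the 1-second TTL should then show `HIT`, and any request carrying the `_mcnc` cookie should show `BYPASS`.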
@ghost

question regarding:

If non GET/HEAD, don't cache & mark user as uncacheable for 1 second via cookie

why "mark user as uncacheable for 1 second via cookie"?

@chaucerbao

Probably because when the user creates a resource (like a Post) and is redirected to the collection of Posts afterwards, you'll want the list to include the new Post.
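That decision logic can be sketched outside nginx. The function below is purely illustrative (it mirrors the `$no_cache` logic from the gist's config; only the method names and the `_mcnc` cookie name come from the gist):

```python
def bypass_cache(method: str, cookie_header: str = "") -> bool:
    """Mirror of the $no_cache decision in the gist's location block."""
    if method.upper() not in ("GET", "HEAD"):
        return True   # writes go straight to the app server, never cached
    if "_mcnc" in cookie_header:
        return True   # user just wrote something; serve them fresh data
    return False      # plain read: eligible for the 1-second microcache

# A POST both skips the cache and (via Set-Cookie) marks the follow-up
# redirect GET as uncacheable, so the user sees their new Post:
print(bypass_cache("POST"))            # True
print(bypass_cache("GET", "_mcnc=1"))  # True
print(bypass_cache("GET"))             # False
```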

@Panthro

why this?

    # Only cache valid HTTP 200 responses for 1 second
    proxy_cache_valid 200 1s;
@fhsocca

@Panthro
That sets how long nginx will serve the cached response (for that status code, here 200). In this case, every second it refreshes the content from upstream and serves the new content for another second, and so on. It could be 1m, 1h, 1d; it all depends on how dynamic your page is. Even 1 second is enough to absorb a really heavy burst of requests. That is the whole point of micro-caching.
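A back-of-the-envelope illustration of that point (the traffic numbers are hypothetical, and this ignores `proxy_cache_use_stale updating`, which reduces upstream hits even further):

```python
def upstream_requests(req_per_sec: int, ttl_sec: int, window_sec: int) -> int:
    """Requests that reach the app server during a burst:
    at most one cache refresh per TTL expiry per cache key."""
    total = req_per_sec * window_sec
    refreshes = window_sec // ttl_sec  # one cache miss each time the TTL lapses
    return min(total, refreshes)

# 500 req/s for 60s = 30,000 client requests, but with a 1s TTL
# only about 60 of them ever reach the upstream app server:
print(upstream_requests(500, 1, 60))  # 60
```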
