This gist assumes you are migrating an existing site for www.example.com — ideally WordPress — to a new server — ideally Ubuntu Server 16.04 LTS — and wish to enable HTTP/2 (backwards compatible with HTTP/1.1) with always-on HTTPS, caching, compression, and more. Although these instructions are geared towards WordPress, they should be trivially extensible to other PHP frameworks, other FastCGI backends, and even non-FastCGI backends (using `proxy` in lieu of `fastcgi` in the terminal Caddyfile stanza).
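To make that distinction concrete, here is a hedged sketch of such a terminal stanza in Caddy 0.x syntax — the port, root, and socket path are illustrative, not copied from this gist's Caddyfile:

```
# Terminal stanza handing PHP off to PHP-FPM via FastCGI
:2021 {
    root /var/www/example.com/wordpress
    fastcgi / /run/php/php7.0-fpm.sock php
}

# The same stanza for a non-FastCGI backend (e.g. a Node.js app)
# would use proxy in place of fastcgi:
#
#     proxy / localhost:3000
```

Everything else in the stanza — logging, rewrites, headers — stays the same regardless of which hand-off directive you use.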
Quickstart: Use your own naked and canonical domain names instead of example.com and www.example.com and customize the Caddyfile and VCL provided in this gist to your preferences!
## What's the point of all this? Why are there so many layers?
The primary goal of this architecture is to massively and immediately improve any given WordPress site with a quick and easy upgrade to HTTP/2, always-on HTTPS, and robust caching and compression middleware. To that end, we can envision a stack that looks something like:
```
                                           +-----------+
   Request                            +--->|  Dynamic  |
                                      |    |  Content  |
+----------+    +----------+    +-----+----+-----------+
|          +--->|          +--->| Cache &  |
|  Client  |    | SSL/Edge |    | Compress |
|          |<---+          |<---+          |
+----------+    +----------+    +-----+----+-----------+
                                      |    |  Static   |
   Response                           +--->|  Assets   |
                                           +-----------+
```
Varnish enables caching and compression of static assets as well as dynamic content and is the best in its class, so let's plug that in. This leaves us wanting for implementations of the following components:
- SSL termination and certificate management
- Request sanitization and XSS attack mitigation*
- Request and error logging
- IP-based whitelisting and rate-limiting
- URL rewriting (in request headers)
- URL canonicalization (in response bodies)
- Static asset serving
- Hand-off to Varnish via reverse proxy
- Hand-off to PHP-FPM via FastCGI
Caddy does (*almost) all of these things right out of the box. It makes the "happy path" very happy. And although HAProxy might be better at the edge, and NGINX might have more features, Caddy is a nice fit for this use-case. Performance is still an open question, but so far it's held up under load quite capably for me.
Our architecture starts to take shape as we fill in the boxes:
```
                                           +-----------+
   Request                            +--->|  FastCGI  |
                                      |    | (PHP-FPM) |
+----------+    +----------+    +-----+----+-----------+
|          +--->|          +--->|          |
|  Client  |    |  Caddy   |    | Varnish  |
|          |<---+          |<---+          |
+----------+    +----------+    +-----+----+-----------+
                                      |    |  Static   |
   Response                           +--->|  Assets   |
                                           +-----------+
```
Ah, but there's a problem: Varnish can't talk to FastCGI or serve static assets from disk. Fortunately, Caddy can do both! We just need to sandwich Varnish between two Caddy virtual hosts:
```
                                                           +-----------+
   Request                                            +--->|  FastCGI  |
                                                      |    | (PHP-FPM) |
+----------+    +----------+    +----------+    +-----+----+-----------+
|          +--->|          +--->|          +--->|          |
|  Client  |    |  Caddy   |    | Varnish  |    |  Caddy   |
|          |<---+          |<---+          |<---+          |
+----------+    +----------+    +----------+    +-----+----+-----------+
                                                      |    |  Static   |
   Response                                           +--->|  Assets   |
                                                           +-----------+
                |--------Distribution---------|--------Origination--------|
```
This should (hopefully) illustrate why we need so many different layers. It may help to think of the left side of the stack as content distribution and the right side of the stack as content origination. Keeping these two halves separate makes it easy to scale when you're ready by letting you swap in a content distribution network (CDN)—which is essentially caching and compression as a service with anycast and SSL termination—while keeping the same origin:
```
Hypothetical Future State
=========================

                                           +-----------+
   Request                            +--->|  FastCGI  |
                                      |    | (PHP-FPM) |
+----------+    +----------+    +-----+----+-----------+
|          +--->|          +--->|          |
|  Client  |    |   CDN    |    |  Caddy   |
|          |<---+          |<---+          |
+----------+    +----------+    +-----+----+-----------+
                                      |    |  Static   |
   Response                           +--->|  Assets   |
                                           +-----------+
                |-Distribution-|--------Origination--------|
```
Alternatively, if you don't go the CDN route, you have the ability to add more Caddy, Varnish, and/or PHP-FPM instances distributed and load-balanced across multiple servers.
## Caching of Static Assets
It is no longer en vogue to cache static assets in memory, the logic being that `sendfile(2)` and the filesystem cache can deliver files from disk faster than a userland process can copy bytes from memory. Unfortunately, you still have to gzip uncompressed content (or gunzip compressed content) depending on the client's Accept-Encoding, so `sendfile(2)` can't be used in all cases — only when dealing with "naturally" compressed static assets like images and videos. In a more perfect world, you might want an architecture that looks something like this:
```
                                            +------------+
   Request                             +--->|  Dynamic   |
                                       |    |  Content   |
+----------+    +----------+    +------+---++------------+
|          +--->|          +--->| Cache &   |
|  Client  |    | SSL/Edge |    | Compress  |
|          |<---+          |<---+           |
+----------+    +----+-----+    +-----^----++
                     |                |     +--------------+
   Response          |                |     |     Un-      |
                     |                +-----+  Compressed  |
                     |                      |    Static    |
                     |                      |    Assets    |
                     |                      +--------------+
                     |    +--------------+
                     +--->|  Compressed  |
                          |    Static    |
                          |    Assets    |
                          +--------------+
```
Unfortunately, Caddy doesn't give us a great way to say "serve already-compressed files from disk (if present) and send all other requests upstream," so we're more-or-less stuck with the architecture described above. Maybe a future version of Caddy's `proxy` directive will support an `ext` subdirective, and this will become an easier problem to solve: we'd be able to send only requests for certain file extensions upstream.
Meanwhile, and more critically, static assets are often large enough to push dynamic content out of the cache, forcing expensive re-renders and largely defeating the purpose of this architecture. If you have a lot (i.e. more than 100 MB) of static assets, it may be worth partitioning your Varnish backend storage into multiple "stevedores" to prevent this from happening. To do so, you'll need to modify your Varnish service unit file (e.g. by creating `/etc/systemd/system/varnish.service.d/override.conf`) to give your `malloc` storage backend a name (e.g. "dynamic") and declare a new, named `file` storage backend (e.g. "static"). Here's what mine looks like:
```ini
[Service]
Type=simple
ExecStart=
ExecStart=/usr/sbin/varnishd -j unix,user=vcache -F \
    -a localhost:6081 -T localhost:6082 \
    -f /etc/varnish/default.vcl -S /etc/varnish/secret \
    -s dynamic=malloc,256m -s static=file,/var/lib/varnish,2048m
```
Don't forget to uncomment the relevant lines in the VCL provided in this gist. Register the updated service unit with `systemctl daemon-reload` and restart Varnish with `systemctl restart varnish`. Now your static assets will be cached on disk, which should be a performance wash, and precious in-memory cache space will be reserved for dynamic content.
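In VCL terms, routing objects between the two stevedores is a one-liner in `vcl_backend_response`. Here's a sketch, assuming Varnish 4.1 (as shipped with Ubuntu 16.04); the extension list is illustrative, not the gist's exact rule:

```vcl
sub vcl_backend_response {
    # Send large, "naturally" compressed static assets to the file-backed
    # "static" stevedore; everything else lands in the "dynamic" malloc store.
    if (bereq.url ~ "\.(jpe?g|png|gif|ico|woff2?|mp4|webm)(\?.*)?$") {
        set beresp.storage_hint = "static";
    } else {
        set beresp.storage_hint = "dynamic";
    }
}
```

The names "static" and "dynamic" must match the `-s` flags in the service unit override shown above.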
Alternatively, you can simply tell Varnish not to cache such files and pass or pipe them through to Caddy. In my benchmarking, I've found the solution documented above to be significantly (up to 2X) faster than this strategy.
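If you choose that simpler strategy anyway, the sketch below shows the idea — again with an illustrative extension list, not the gist's exact rule:

```vcl
sub vcl_recv {
    # Don't cache static files at all; let Caddy serve them from disk.
    if (req.url ~ "\.(jpe?g|png|gif|ico|woff2?|mp4|webm)(\?.*)?$") {
        return (pass);
    }
}
```

Use `pass` rather than `pipe` unless you have a specific reason not to; `pipe` bypasses normal HTTP processing entirely.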
Follow these steps to configure your server.
Set up your firewall, minimally:
```sh
# ufw disable
# ufw reset
# ufw limit OpenSSH
# ufw allow to any port 80,443 proto tcp
# ufw enable
```
Make a directory for your WordPress installation (e.g. `/var/www/example.com/wordpress`) and log files (e.g. `/var/www/example.com/logs`). Your Caddyfile can go in this directory structure (e.g. `/var/www/example.com/Caddyfile`), or you can build a Caddyfile "repo" somewhere in `/etc/caddy` and go nuts with the `import` directive if you have multiple sites.
Follow the normal process to migrate your WordPress site to its new home directory and restore the database. These steps are well-documented elsewhere and so are not covered here. (You'll need to install MySQL/MariaDB to migrate the database, but don't install Apache, NGINX, PHP, or any other services; we'll cover that below!)
Install `varnish` from apt. Copy `default.vcl` from this gist into `/etc/varnish`. It should work as-is in this basic configuration, and is essentially a blank slate for future customization using the expressive Varnish Configuration Language.
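For orientation, the heart of that file is just a backend declaration pointing Varnish at the origin Caddy vhost. A sketch, assuming the origin stanza listens on localhost:2020 — verify the port against the Caddyfile in this gist before relying on it:

```vcl
vcl 4.0;

backend default {
    # Assumes the origin Caddy vhost listens on :2020; adjust to
    # match the stanza in your Caddyfile.
    .host = "localhost";
    .port = "2020";
}
```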
Be sure to install the appropriate PHP extension(s) for your database; for example, `php7.0-mysql` for MySQL/MariaDB. I found I needed to install `php7.0-mbstring` as well. YMMV. Don't forget to reload the `php7.0-fpm` service whenever you add/remove extensions.
Start and enable the `php7.0-fpm` and `varnish` services. Optionally stop and disable any services you no longer need.
You'll need to write a service unit file for Caddy and install it to `/etc/systemd/system`; one generated by antoiner77.caddy v2.0.1 is included in this gist for reference. (If you're already familiar with Ansible, this galaxy role is great for downloading and installing Caddy on your host.)
You'll need to decide how Caddy will acquire its SSL certificate. If you don't already have an SSL certificate for the site and can afford a few minutes of downtime, the easiest method by far is to simply point the domain name at the new server, wait for the TTL to expire, and let Caddy's Automatic HTTPS feature apply the http-01 and/or tls-sni-01 challenge(s) at startup. Be warned that you might get temporarily blocked by Let's Encrypt if something isn't configured correctly and it looks like you're spamming their systems.
Copy the Caddyfile from this gist to wherever you've decided to keep it and customize it for your site:
- Set the canonical domain name for each stanza.
- Set the email address for your Let's Encrypt account (Caddy will create the account for you).
- Decide whether you want to support older TLS 1.0 clients or not.
- Update directory paths to reflect your WordPress installation and log directories.
- Choose log rotation strategies for your access and error logs.
- Update the `ipfilter` block with your allowed IP address range(s).
- If new user signup is open to the public, remove /wp-login.php from the `ipfilter` block.
- If you need XML-RPC, remove or comment out the statements dealing with X-Pingback and xmlrpc.php.
- Adjust the filter rule to correctly rewrite your non-canonical URLs to the canonical URL. For example, rewrite http://example.org to https://www.example.com.
- Adjust the caching policy for (truly) static assets.
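To make that last item concrete, here's a hedged sketch using Caddy 0.x's `header` directive — the path and max-age are illustrative, not the gist's exact policy:

```
# Long-lived, immutable uploads can be cached aggressively
header /wp-content/uploads {
    Cache-Control "public, max-age=31536000"
}
```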
Start and enable the `caddy` service when all's said and done.
Most WordPress optimization guides suggest unsetting cache control headers such as Cache-Control, ETag, Last-Modified, Expires, etc. Instead, we're going to set them and let Varnish serve posts, pages, and attachments as slowly-changing dynamic content. Yes, you read that correctly: your WordPress site is now a glorified static site generator!
- Change your WordPress and Site Addresses to the canonical `https://` URL, either in the database, in `wp-config.php`, or by using the wp-cli tool.
- Apply the `wp-config.php` snippets provided in this gist.
- Install and activate the Add Headers and Cache Control plugins.
- After the site is up and running, you can (and should!) browse it with the error console open to see which remote assets (e.g. analytics tracking scripts) are still being loaded via `http://`, and take the appropriate action(s) to update those links to `https://`. If you have a larger site, there are more efficient methods outside the scope of this guide.
- Force your administrator users to change their passwords over HTTPS!
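Because SSL terminates at the edge Caddy, WordPress itself only ever sees plain HTTP from the origin. The gist ships its own `wp-config.php` snippets; for illustration, the classic reverse-proxy fix looks like this:

```php
/* In wp-config.php, above the "That's all, stop editing!" line.
   Only safe when the site is exclusively reachable through your
   proxy, since clients could otherwise spoof the header. */
if (isset($_SERVER['HTTP_X_FORWARDED_PROTO'])
        && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
    $_SERVER['HTTPS'] = 'on';
}
```

This assumes the edge proxy sets the X-Forwarded-Proto header; check your Caddyfile's `proxy` configuration to confirm.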
If you have a lot of dynamic fragments and/or pages on your site, for example a shopping cart or PHP-based discussion forum, caching may not work correctly out of the box. Fortunately, there are ways to vary the cache based on the logged-in user's session cookie or other factors. (TODOC.) If you'd rather not deal with this part of the puzzle, simply deactivate the Add Headers and Cache Control plugins.
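One common approach — a sketch, not the VCL shipped in this gist, though the cookie names are the stock WordPress ones — is to bypass the cache entirely for logged-in users:

```vcl
sub vcl_recv {
    # WordPress sets wordpress_logged_in_* for authenticated users and
    # wp-postpass_* for password-protected posts; never serve those
    # requests from cache.
    if (req.http.Cookie ~ "(wordpress_logged_in_|wp-postpass_)") {
        return (pass);
    }
}
```

Varying the cache by session cookie (rather than passing) is also possible, but costs cache hit ratio and is easy to get wrong.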
- The ports and Varnish instance can be shared between Caddyfiles. Caddy will select vhosts using the Host header, and Varnish will vary its cache by the Host header. (You may not want to re-use the Varnish instance if you have some gnarly, site-specific VCL, but that's another question entirely.)
- Optionally update/override `varnish.service` to bind to localhost, i.e. `-a localhost:6081` as in the override shown above.
- Don't forget to hit up Google and Bing Webmaster Tools to change the preferred URL to `https://`, upload a new site map, etc.
- I've also had good luck with the HTTP Headers plugin, to add Strict-Transport-Security, X-Frame-Options, etc.
- Revisit this configuration when Caddy adds support for PROXY protocol. https://github.com/mholt/caddy/issues/852
- Revisit the :2020 stanza (don't delete the Content-Length header) when the bug(s) with `proxy` are resolved. https://github.com/echocat/caddy-filter/issues/2 https://github.com/echocat/caddy-filter/issues/3
- Merge the :2020 and :2021 stanzas when the bug(s) with `fastcgi` are resolved. https://github.com/echocat/caddy-filter/issues/4
- Revisit the :2020 stanza (remove the note about fastcgi.logging) when the bug(s) with `fastcgi` are resolved. https://github.com/echocat/caddy-filter/issues/4
- Revisit the `ipfilter` block if/when it gains the ability to whitelist/exclude file(s). https://github.com/pyed/ipfilter/issues/20
- Revisit the `ratelimit` block if/when it gains the ability to `from` an IPv6 address. https://github.com/captncraig/caddy-realip/issues/5
- Is there a way to safely apply a global ratelimit?
- Will Varnish 5.0 allow us to use HTTP/2 all the way down to FastCGI?
- What more can we do to secure a typical WordPress installation, particularly .php files in wp-includes and wp-content?
- Why is this a weird, rambly gist?
- It started out as a few code snippets and grew into a huge thing. I think it should probably be a blog post, a repo, an Ansible role, etc. I'll develop it further if there's interest.