shrikeh/blog.md
Last active March 6, 2024 19:10
Creating a performant docker sandwich for PHP

A long time ago, I wrote about how to share the PHP-FPM Unix socket with nginx in docker. It was pointed out that you do lose load balancing with this approach, because the two docker containers share a volume.

Were this the only point of segregation in our application, it would significantly hinder scaling. At that point, the performance and security gains of a Unix socket are vastly outgunned by the inability to scale parts of your container infrastructure granularly - a core reason for using containers in the first place.

All is not lost though. We can keep the two docker containers linked, while providing other places to scale our app and improve performance.

Creating from scratch

Let’s start by creating a test project called sandwich. I’m going to create a very simple app, so I won’t use Symfony or similar, and because this will be “Hello World” levels of complexity, let’s just assume you’re using Atom:

➜  Workspace> mkdir sandwich
➜  Workspace> cd sandwich
➜  sandwich> atom ./

For this example, we’re going to create three directories:

- dist, where we assume your built frontend assets will end up.
- app, which we assume is your PHP application.
- docker, which is where we put all the configuration for our containers.

Here’s your one-liner to copy:

➜  sandwich> mkdir dist app docker

Fill dist with whatever images of cats and memes you want for testing this.

Now let’s create a simple docker-compose.yml in the root of our project:

---
#./docker-compose.yml
version: "3.7"

services:
  nginx-frontend:
    build:
      context: ./
      dockerfile: docker/nginx-frontend/Dockerfile
    ports:
      - "80:80"
    volumes:
      - ./dist:/dist:ro
      - type: tmpfs
        target: /var/cache/nginx
      - type: tmpfs
        target: /var/run/nginx

Now we’ll put the Dockerfile for this in docker/nginx-frontend/ and set it up:

#./docker/nginx-frontend/Dockerfile

ARG NGINX_TAG="1.17.7-alpine"

FROM nginx:${NGINX_TAG} as frontend

Note that we are using Docker’s ARG feature to give us configurable builds, with sensible working defaults: we can pass a different tag with NGINX_TAG, as part of docker-compose’s args key.
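As an illustration, a hypothetical override in docker-compose.yml might look like this (the tag value here is purely for example’s sake):

```yaml
# docker-compose.yml (illustrative): pinning a different nginx tag
services:
  nginx-frontend:
    build:
      context: ./
      dockerfile: docker/nginx-frontend/Dockerfile
      args:
        NGINX_TAG: "1.17.8-alpine"
```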

All being well, when we run docker-compose up, we should see it build the nginx container, and we can view the default nginx page at http://localhost.

This container is going to be for our static assets, so we mount the directory /dist, which will contain images, CSS and JavaScript. Let’s override the default server configuration for nginx by creating a new default.conf in our docker/nginx-frontend folder:

➜  sandwich> mkdir -p docker/nginx-frontend/conf.d
➜  sandwich> touch docker/nginx-frontend/conf.d/default.conf

Now let’s edit this file. The theory of operation here is fairly simple; the frontend serves static files, and anything else is a 404, which it is going to send upstream to our application. This means that the frontend can be heavily optimised for static content alone, yet simplified by not needing to know any details related to dynamic content.

Let’s start by creating an upstream block for our frontend app:

# docker/nginx-frontend/conf.d/default.conf
upstream app {
  # The official Varnish image listens on port 80 by default
  server backend-cache:80;
}

There’s more configuration we could add here for timeouts and retry attempts, but we’ll keep it simple. Similarly, we could have the frontend act as a TLS termination layer, which would be ideal. For now, we will just have the site run plain old HTTP, with some minor tweaks for static assets. Let’s add a server block directly beneath the upstream block:

server {
  listen 80 default_server;
  server_name localhost _;

  location / {
    # Because we only provide a root under a location block,   
    # we can be fairly certain that we can’t accidentally
    # expose anything that shouldn’t be public.
    root   /dist;
    location ~* \.(jpg|png|gif|ico|css|js|pdf|svg|html)$ {
      limit_except GET {
        deny all;
      }
      # Some minor improvements for static assets
      expires 7d;
      access_log off;
      sendfile on;
      sendfile_max_chunk  1m;
      tcp_nopush  on;
    }
    # Anything we can’t find, send to the 404 error handler.
    try_files $uri =404;
  }

  # Set the 404 error handler to be the application
  error_page 404 = @app;
  error_page 405 = @error405;

  location @error405 {
    # Conditionally add the Allow header if we get 405.
    add_header Allow "GET, POST, HEAD" always;
  }

  # Proxy all to the “app” backend
  location @app {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://app;
  }
}

Now let’s append to our Dockerfile to copy the directory into place. Because our file is also named default.conf, it replaces the stock one shipped in the image:

COPY docker/nginx-frontend/conf.d /etc/nginx/conf.d

Now, any static asset that exists in /dist will be served directly from nginx, and anything else will be proxied upstream. So what should be upstream? Well, our premise here is that we need performance, and since anything not static is by definition dynamic, we should insert a caching layer - in this case, Varnish:

# docker-compose.yml
services:
  # ...
  backend-cache:
    build:
      context: ./docker/backend-cache
      dockerfile: Dockerfile
    volumes:
      # Add these to grant the container tmpfs storage for
      # writing cache
      - type: tmpfs
        target: /var/lib/varnish:exec
      - type: tmpfs
        target: /usr/local/var/varnish:exec
    expose:
      - 80
    networks:
      - frontend_web
      - backend_app
    depends_on:
      - nginx-backend

# docker/backend-cache/Dockerfile
ARG VARNISH_TAG="6.3"

FROM varnish:${VARNISH_TAG}

COPY ./default.vcl /etc/varnish/

# docker/backend-cache/default.vcl
vcl 4.1;

backend default {
  .host = "nginx-backend";
  .port = "8081";
}
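If you later wanted to start tuning the cache, one hypothetical tweak - a sketch only, with an arbitrary value - would be appending a vcl_backend_response sub to default.vcl to give responses without caching headers a short default TTL:

```vcl
# Sketch only: give otherwise-uncacheable-TTL responses a short baseline TTL
sub vcl_backend_response {
  if (beresp.ttl <= 0s) {
    set beresp.ttl = 30s;
  }
}
```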

Hooray! Now we have a very simple (perhaps too simple) caching layer that is going to talk to our next layer: the nginx backend. There are loads of tweaks that we can apply via Varnish, though luckily, with our configuration, we can ignore those to do with static files. So let’s move on to configuring our dynamic site:

# docker-compose.yml
services:
  # ...
  nginx-backend:
    build:
      context: ./
      dockerfile: docker/nginx-backend/Dockerfile
    volumes:
      - socket:/socket
    networks:
      - backend_app
    expose:
      - 8081
volumes:
  socket:

And our config file is now very PHP-specific, because we can be sure only calls to dynamic pages or 404s get here:

# docker/nginx-backend/conf.d/default.conf
upstream fpm {
  server unix:/socket/app.sock;
}

server {
  listen 8081 default_server;
  server_name localhost _;
  index index.html;

  location / {
    try_files $uri /index.php$is_args$args;
  }

  location ~ ^/index\.php(/|$) {
    try_files /dev/null @php;
  }

  error_page 404 = @php;

  location @php {
    root /site/app;
    fastcgi_split_path_info ^(.+?\.php)(/.*)$;
    # include the fastcgi_param settings
    include fastcgi.params;

    fastcgi_pass fpm;
  }
}

I’ll skip over creating the fastcgi.params file for now and concentrate on setting up the shared group name and GID for nginx and PHP-FPM in the Dockerfile:

# docker/nginx-backend/Dockerfile
ARG NGINX_TAG="1.17.7-alpine"

FROM nginx:${NGINX_TAG} as backend
ARG APP_GROUP_ID=2001
ARG APP_GROUP_NAME="app"

# BusyBox (Alpine) addgroup uses -S/-g rather than GNU-style long options
RUN addgroup -S -g ${APP_GROUP_ID} "${APP_GROUP_NAME}"
RUN addgroup nginx "${APP_GROUP_NAME}"

COPY docker/nginx-backend/conf.d /etc/nginx/conf.d
COPY docker/nginx-backend/fastcgi.params /etc/nginx/
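Since we’re skipping a full treatment of fastcgi.params, here is a rough sketch of what a minimal version could contain - essentially mirroring the stock fastcgi_params file that ships with nginx, plus SCRIPT_FILENAME so PHP-FPM knows which script to run. Treat this as illustrative, not exhaustive:

```nginx
# docker/nginx-backend/fastcgi.params (sketch, not exhaustive)
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO       $fastcgi_path_info;
fastcgi_param QUERY_STRING    $query_string;
fastcgi_param REQUEST_METHOD  $request_method;
fastcgi_param CONTENT_TYPE    $content_type;
fastcgi_param CONTENT_LENGTH  $content_length;
fastcgi_param REQUEST_URI     $request_uri;
fastcgi_param SERVER_PROTOCOL $server_protocol;
fastcgi_param REMOTE_ADDR     $remote_addr;
```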

Finally, we need to have PHP-FPM listening for requests on a socket. The official PHP-FPM Docker image ships three files in php-fpm.d - two to configure it specifically for Docker, and the pool config for the application. We only need to edit these slightly:

; docker/php-fpm/php-fpm.d/www.conf
...
; Comment out listening on a port
;listen = 9000 
listen = /socket/app.sock
listen.owner = www-data
listen.group = app
listen.mode = 0660

Also, in zz-docker.conf, remove or comment out this line:

[www]
; remove or comment out the listening on a port
;listen = 9000 
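For reference, the edited zz-docker.conf would then look something like this (the [global] section is what I’d expect the stock image to ship; only the listen line changes):

```ini
; /usr/local/etc/php-fpm.d/zz-docker.conf (after editing)
[global]
daemonize = no

[www]
; listen = 9000
```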

Finally, let’s have our PHP-FPM Dockerfile import these into the container on build:

# docker/php-fpm/Dockerfile
ARG PHP_TAG="7.4.1-fpm-alpine3.11"
FROM php:${PHP_TAG} as php-fpm

ARG APP_GROUP_ID=2001
ARG APP_GROUP_NAME="app"

RUN addgroup -g ${APP_GROUP_ID} -S "${APP_GROUP_NAME}"

COPY docker/php-fpm/php-fpm.d /usr/local/etc/php-fpm.d

Tying it all together, let’s edit our docker-compose.yml so that PHP-FPM uses the same shared volume for the socket as the nginx backend, and that the nginx backend is on the same network as varnish, but not the same network as the frontend instance of nginx:

---
version: "3.7"

services:
  nginx-frontend:
    build:
      context: ./
      dockerfile: docker/nginx-frontend/Dockerfile
    ports:
      - "80:80"
    volumes:
      - ./dist:/dist:ro
      - type: tmpfs
        target: /var/cache/nginx
      - type: tmpfs
        target: /var/run/nginx
    depends_on:
      - backend-cache
    networks:
      - frontend_web
  backend-cache:
    build:
      context: ./docker/backend-cache
      dockerfile: Dockerfile
    volumes:
      - type: tmpfs
        target: /var/lib/varnish:exec
      - type: tmpfs
        target: /usr/local/var/varnish:exec
    expose:
      - 80
    networks:
      - frontend_web
      - backend_app
    depends_on:
      - nginx-backend
  nginx-backend:
    build:
      context: ./
      dockerfile: docker/nginx-backend/Dockerfile
    volumes:
      - socket:/socket
    networks:
      - backend_app
    expose:
      - 8081
    depends_on:
      - php-fpm
  php-fpm:
    build:
      context: ./
      dockerfile: docker/php-fpm/Dockerfile
    volumes:
      # Make the application directory read only for security
      - ./app:/site/app:ro
      - socket:/socket
volumes:
  socket:
networks:
  frontend_web:
  backend_app:

We are pretty good to go. Let’s create a very simple PHP script to test everything is wired up and that Varnish is caching correctly:

<?php 
// app/index.php
echo time();

Let’s rock!

➜  sandwich> docker-compose up

All being well, we can now visit http://localhost and see a timestamp. Hooray! But wait! If you press refresh, it doesn’t change. Checking the log output of docker-compose, we can see that nginx-backend isn’t even receiving traffic - so Varnish is fully responsible for serving the page.
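You can also confirm this from the command line by comparing response headers across requests - with an unmodified Varnish, cache hits should carry a growing Age header (and an X-Varnish header listing two transaction IDs on a hit):

```
➜  sandwich> curl -sI http://localhost/ | grep -iE 'age|x-varnish'
➜  sandwich> curl -sI http://localhost/ | grep -iE 'age|x-varnish'
```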

There are pros and cons to this approach: we have several layers to our app, which can add complexity. Nevertheless, each layer is fairly straightforward, with a clear separation of concerns. However, by sharing a volume for the socket, we’ve locked the backend nginx to PHP-FPM. This has implications for scaling, as orchestration tools such as Kubernetes will have to keep these two containers on the same host and in a 1:1 ratio. But since in front of it sit two separately scalable layers - both of which can absorb far more traffic than a single nginx instance would normally handle - we still have several places to scale our application. Locking the web server to the PHP interpreter in this way is conceptually not so different from using Apache with mod_php.

Your mileage, as they say, may vary. I’ve provided a slightly tweaked version of this tutorial at https://github.com/shrikeh/example-php-nginx-varnish-docker if you’d like to clone it and have a play.

Enjoy!
