Docker setup – part 6: redundant php-fpm server

Redundant PHP-fpm service

Now the service stack has a load balancer and redundant Nginx web servers, but PHP-fpm still runs as a single service. Most of the processing when serving a page request happens in the PHP-fpm server. What prevents us from replicating the PHP-fpm service is that session data is stored on the server's local filesystem, so if we simply replicated the PHP-fpm server without sharing the sessions, it would not work.

In this part, we will make the necessary changes to support a redundant PHP-fpm service.

In standard PHP, session data is stored in a local file on the same server that PHP runs on. When a browser requests a page, the session data is loaded from that file and made available to the PHP script. If we load balance between multiple PHP-fpm servers, a request can land on a server that does not have the session file, so the session content is not available.

To fix this we need a way to make the session data available across servers. Memcached and Redis are popular choices. Both do what I need, but Redis seems the more popular choice and has a larger feature set. Redis provides a key-value store where PHP can save the serialized session data as the value, using the session identifier as the key. PHP-fpm talks to Redis over standard TCP, and Redis acts as central storage, so any of the replicated PHP-fpm servers can access the same session data.
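To make this concrete, here is a minimal sketch of application code once the Redis session handler is configured (as set up below). Nothing in the script changes; PHP transparently reads and writes the session to Redis instead of a local file:

```php
<?php
// With session.save_handler = redis, session_start() loads the
// serialized session data from Redis instead of a local file,
// using the ID from the session cookie as part of the key.
session_start();

if (!isset($_SESSION['visits'])) {
    $_SESSION['visits'] = 0;
}
$_SESSION['visits']++;

// phpredis stores this under a key like "PHPREDIS_SESSION:<session-id>",
// so every replicated PHP-fpm container sees the same counter.
echo "Visits this session: " . $_SESSION['visits'];
```

The key point is that the switch is purely configuration; existing code using $_SESSION keeps working unchanged.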

What about WordPress?

This extension of the setup is actually not needed for my WordPress setup since it does not use sessions. This is a very clever design choice by the WordPress people since it makes caching much easier to handle.

What about locking?

With standard PHP file sessions, when session_start() is called, PHP takes a lock on the session file and holds it while processing the request; when the request completes, the lock is released. The effect is that only one request using a given session can be processed at a time. The advantage is that the session cannot be corrupted by two requests saving session data to the same file simultaneously, and if the first request changes data in the session, the next request cannot read the file before that data is saved, avoiding race conditions.

When the session handler is changed to Redis, this lock is no longer taken. We get more throughput because multiple requests for the same session can be processed simultaneously, but it opens the door to race conditions.
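A hypothetical illustration of the lost-update race, and one common way to shrink the window (newer phpredis releases also offer an optional locking mode, but the version used here predates it):

```php
<?php
session_start();

// Two concurrent requests for the same session can both read the
// same value here...
$counter = $_SESSION['counter'] ?? 0;
$_SESSION['counter'] = $counter + 1;

// ...and whichever request writes last wins, silently losing the
// other increment. With file sessions, the second session_start()
// would have blocked until the first request finished.

// One mitigation: write the session back as early as possible, so
// the window for a concurrent request to interleave is small.
session_write_close();

// Continue with work that no longer touches $_SESSION.
```

This does not eliminate the race, it only narrows it; code that truly needs atomic updates should not keep that state in the session.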

Changes to setup

The setup changes are actually minimal. First I needed to add an extra config file to the PHP-fpm server that changes the session handler to Redis:

session.save_handler = redis
session.save_path = "tcp://redis:6379"
session.gc_maxlifetime = 86400
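A nice side effect of gc_maxlifetime with this handler is that phpredis writes each session key with a matching TTL, so Redis expires stale sessions by itself. As a quick sanity check (the container name and session ID below are placeholders):

```
# List the session keys phpredis has created (its default key prefix
# is "PHPREDIS_SESSION:"), then check the remaining lifetime of one.
docker exec -it <redis-container> redis-cli KEYS 'PHPREDIS_SESSION:*'
docker exec -it <redis-container> redis-cli TTL 'PHPREDIS_SESSION:<session-id>'
```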

PHP also needs the Redis extension to be able to communicate with Redis, so I added the extra commands needed in the Dockerfile to install it:

RUN apk add --no-cache --virtual .build-deps \
        pcre-dev ${PHPIZE_DEPS} g++ make autoconf \
    && pecl install redis-3.1.6 \
    && docker-php-ext-enable redis
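Since the build dependencies are installed as a virtual package (.build-deps), they can also be removed again once the extension is compiled, which keeps the image smaller. A sketch of that variant, using the same versions as above:

```
RUN apk add --no-cache --virtual .build-deps \
        pcre-dev ${PHPIZE_DEPS} g++ make autoconf \
    && pecl install redis-3.1.6 \
    && docker-php-ext-enable redis \
    && apk del .build-deps
```

The compiled extension does not need the toolchain at runtime, so apk del .build-deps removes it all in one step.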

Then I added the Redis service to the docker-compose.yml file:

redis:
  image: redis:4.0.8-alpine

And configured two replicas of the PHP-fpm service:

php:
  image: 637345297332.dkr.ecr.eu-west-1.amazonaws.com/patch-php-fpm:latest
  build: php-fpm
  deploy:
    replicas: 2

When the new service stack is started, requests are load balanced round-robin across the PHP-fpm instances.
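Because the compose file uses deploy/replicas, this is deployed as a swarm stack. A quick way to confirm both PHP-fpm tasks are running (the stack name "patch" here is a hypothetical example):

```
# Deploy (or update) the stack; "patch" is an example stack name
docker stack deploy -c docker-compose.yml patch

# The php service should show REPLICAS 2/2, with two running tasks
docker service ls
docker service ps patch_php
```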

You can download the full project here.

Next steps

I’m getting very pleased with the setup. Making the changes so the PHP-fpm service is replicated was painless; both Docker and Bitbucket Pipelines are very nice to work with. I have two large concerns left with the setup:

  1. There is no monitoring of the service – if it fails there is no automatic notification.
  2. When updating the services the sites will be down while the update is running. I would like to scale this to a docker swarm that spans multiple servers so the services can be updated without any downtime.

 


Also published on Medium.