When running a blog or webshop, uptime monitoring becomes essential. We usually do not visit our own site 24/7, so we need help to make sure we are notified if anything breaks. Many different tools exist for this, but one that I think solves it cheaply and easily is Uptime Robot, a hosted service that monitors our site's uptime. This article explains how it works, and I will also touch on a few alternatives.
Redundant PHP-fpm service
Now the service stack has a load balancer and redundant Nginx web servers, but PHP-fpm is still only a single service. Most of the processing when serving a page request happens in the PHP-fpm server. What prevents us from replicating the PHP-fpm service is that session data is stored on the server's local filesystem, so if we replicated the PHP-fpm service without also replicating the sessions, it would not work.
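One common way to remove that limitation, assuming a shared Redis container is added to the stack, is to point PHP's session handler at Redis instead of local files. A sketch of the relevant php.ini directives (this requires the phpredis extension, and `redis` is a hypothetical service name):

```ini
; Sketch: store sessions in a shared Redis service instead of local files,
; so any PHP-fpm replica can serve any request.
; Requires the phpredis extension; "redis" is an assumed service name.
session.save_handler = redis
session.save_path = "tcp://redis:6379"
```

With sessions in shared storage, every PHP-fpm replica sees the same session data, and the load balancer no longer needs to pin a visitor to one backend.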
In this part, we will make the changes necessary to support a redundant PHP-fpm service.
In this part the original plan was to set up the PHP-fpm service to be redundant and to fix the problem with the database backup not running. It ended up being more of a cleanup of the setup, but I did learn many things about Docker in the process.
We will cover the following things:
- How to remove a service from the Docker swarm
- Setting up a job scheduler in Docker to run the backup jobs, for both files and the database
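Removing the old service itself is a one-liner (`docker service rm <name>`). For the job scheduler, one possible sketch, assuming a small cron-capable container and hypothetical backup scripts, is a compose service like this:

```yaml
# Sketch (names are assumptions): a service whose only job is running cron.
# backup-files.sh and backup-db.sh would be hypothetical scripts in ./backup.
services:
  scheduler:
    image: alpine:3
    command: crond -f -l 8            # BusyBox cron in the foreground
    volumes:
      - ./crontabs:/etc/crontabs      # crontab entries, e.g.
                                      # 0 3 * * * /backup/backup-db.sh
      - ./backup:/backup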
The setup needs to scale better than it currently can. Right now http1 and http2 are defined as two separate services instead of one service with two replicas. The build process is also a bit slow, primarily because it compiles PHP on every build. I did not succeed with that part, but I did learn a lot about the Docker build cache; more about that later. I also managed to fix a few other things that bothered me, like the PHP file upload limit, and updating WordPress since v4.9.1 was released after my previous deployment.
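The replica change described above would, in a swarm-mode compose file, turn the two services into one; a sketch (service and image names are assumptions):

```yaml
# Sketch: one "http" service with two replicas instead of http1/http2.
services:
  http:
    image: my-nginx-image   # hypothetical image name
    deploy:
      replicas: 2
```

For reference, the PHP upload limit mentioned above is controlled by the `upload_max_filesize` and `post_max_size` directives in php.ini.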
By combining the power of Docker and Python, I can create an analytics platform that will always run and does not depend on the versions of Python, or anything else, installed on my laptop.
I will show how easy it is to set up Jupyter notebooks and use them for analytics, and how easy it is to publish an analysis to this blog. Docker gives me at least two benefits: 1. I can be sure that when I start my Docker image the analysis will always run, because Docker ensures the versions of all the software are the same. 2. If my laptop is not enough to run an analysis, I can deploy the same Docker image to a much more powerful computer in the cloud.
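As a sketch of what that looks like in practice, a minimal compose service for the notebook server could use the public `jupyter/scipy-notebook` image (the volume path is an assumption):

```yaml
# Sketch: a Jupyter service whose environment is fixed by the image,
# not by whatever is installed on the laptop.
services:
  notebook:
    image: jupyter/scipy-notebook     # pin a specific tag rather than :latest
    ports:
      - "8888:8888"
    volumes:
      - ./notebooks:/home/jovyan/work # notebooks live outside the container
```

The same file then runs unchanged on a beefier cloud host, which is exactly benefit 2 above.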
This part ended up being a lot more involved and much longer than I expected. In this part my goal was to make the production deployment process smoother, make it easier to do development on the setup and prepare it to be more redundant.
Before I started, the production deploy process was to connect my IDE (PhpStorm), with its docker-compose integration, to my remote Docker daemon and run a `docker-compose up`. This workflow has some drawbacks:
- It is easy to deploy to production by mistake, since it is only a single click in the wrong place, so I can't be sure that the running version maps to a specific revision in my Git history
- When I deploy, the containers are taken down until the new containers have booted, causing service disruption
- There is no easy way to revert: if I deploy a setup with errors, rolling back to a previous version is a manual process
- I would like to work with Continuous Integration / Continuous Deployment (CI/CD) as a pipeline for a smoother workflow, which requires tighter control of the process
- I don't have an easy way to control differences between development and production
- I would also like to prepare for replicated services, where the service fails over if an error crashes a container.
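Several of these points (deploys without downtime, easy rollback, replication) are what Docker Swarm's `deploy` options in a version 3 compose file address; a sketch of the relevant keys:

```yaml
# Sketch: swarm-mode deploy settings for rolling updates and failover.
deploy:
  replicas: 2               # two containers behind the service
  update_config:
    parallelism: 1          # replace one container at a time
    delay: 10s              # wait between replacements
  restart_policy:
    condition: on-failure   # reschedule a crashed container
```

A bad deploy can then be reverted with `docker service update --rollback <service>`.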
Much of this part is about preparing the setup to solve the challenges above. I did not manage to solve all of the problems up front, which disappointed me a bit, but they will be solved in later parts.
In part 2 I wanted to fix the problem with the load balancer from part 1: it did not actually function as a load balancer, it just proxied each request to the correct web server.
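For reference, the difference is small in the Nginx config: an actual load balancer proxies to an `upstream` group containing both web servers rather than picking one by hostname. A sketch for a single site (server names are assumptions):

```nginx
# Sketch: round-robin load balancing over both web servers.
upstream webservers {
    server http1:80;
    server http2:80;
}

server {
    listen 80;
    location / {
        proxy_pass http://webservers;
    }
}
```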
My day job is developing and managing websites and e-commerce platforms, so I know the importance of stability and scalability for web platforms and technology in general. This is also true for my personal projects. For some time I have wanted to push my projects to the cloud so I can more easily utilize the scalability there. I also only have a laptop, and I don't want a desktop computer taking up space, collecting dust and consuming power. Hardware also gets old and needs maintenance. I would like to skip all those worries.
My goal is to setup a platform that will do a few things for me:
- Allow me to run my blog – this site
- Allow me to run my girlfriend's blog – this site
- Make it easy to update WordPress, since older versions are known to have security issues
- Have good backups
- Alert me if/when the sites are down
- Alert me if/when the sites experience high load
- Protect the sites against DDoS and similar attacks
- Provide me with a coherent platform for all my projects both web and data analysis
- Be a learning platform for new technology in cloud computing
Lofty goals, but I hope to set up a platform that will serve me for many years.