My current hosting setup uses GlusterFS to create a shared filesystem that all my Docker hosts can use. The shared filesystem hosts the upload folder of my WordPress installations.
The cool thing about GlusterFS is that it is a true cluster file share. It is installed on all nodes and makes every file available everywhere, so if one node crashes, the others are unaffected.
But this level of redundancy comes at a price. It is harder to maintain, and the required storage space grows with the number of nodes.
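For a sense of how much simpler the EFS side is, mounting boils down to a single NFS entry per host. This is a sketch only; the filesystem id and region below are placeholders, and the mount options are the commonly recommended ones for EFS, not necessarily the ones I use:

```
# /etc/fstab — fs-12345678 and eu-west-1 are placeholder values
fs-12345678.efs.eu-west-1.amazonaws.com:/  /mnt/efs  nfs4  nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev  0 0
```

With this in place, every host sees the same files without running any cluster daemon locally.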
So I opted for the easy solution and switched to EFS.
Continue reading “Switching from GlusterFS to Amazon EFS”
For my projects, I need a generic website setup that I can reuse for multiple projects.
I want to try out the following setup: a frontend built in Vue, served as static files from Amazon S3; a backend built with .NET Core 2.1 as a REST API documented with Swagger; and Google's Firebase authentication for login.
Since I need a baseline platform for multiple projects, it has to be generic enough to reuse. Most of my projects need a similar shape: a frontend exposed to anonymous users and a backend dashboard that requires authentication.
In this article, I cover how I set up the Vue frontend. In later articles, I will cover authentication and the backend.
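As a minimal sketch of the S3 side, serving the Vue build publicly requires a bucket configured for static website hosting plus a public-read bucket policy roughly like the one below. The bucket name is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-frontend-bucket/*"
    }
  ]
}
```

Deploying is then a matter of uploading the contents of the Vue build output to the bucket.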
Continue reading “Scalable baseline website setup with authentification and VueJS, Amazon S3 and .net core 2.1”
Real men do not take backups, but they cry a lot
But I would rather not cry too much 🙂 so I try to have a good backup solution. After all, I spend an awful lot of time creating data; it would hurt a lot if it were lost by accident, especially since a backup is easy to set up and cheap.
Amazon S3 is my go-to solution for cloud data storage. It is designed never to lose data and to be resilient to disasters. On top of that, it is cheap.
In this article, I dive into what you need to know about Amazon S3 before you start using it for your backup solution.
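As an illustration of how simple the mechanics can be, a nightly sync job fits in a single crontab entry. The paths, bucket name, and schedule below are placeholders:

```
# Nightly at 03:00: mirror local data to S3; infrequent-access storage keeps it cheap
0 3 * * * aws s3 sync /var/data s3://example-backup-bucket/data --storage-class STANDARD_IA
```

Note that `sync` without `--delete` never removes objects from the bucket, and enabling bucket versioning additionally guards against accidental overwrites propagating into the backup.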
Continue reading “Amazon S3 backup strategy”
We do not log in to our servers every day to check resource usage. Just like with uptime monitoring, we need a system that tells us whether everything stays within reasonable limits, so we can scale the servers when required and catch a potential problem before it becomes a real one.
In this article, I will explore how to set up monitoring using Docker, InfluxDB, Grafana, cAdvisor, and Fluentd.
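As a rough sketch of how these pieces fit together, a Docker Compose file along the following lines feeds cAdvisor's container metrics into InfluxDB, with Grafana on top for dashboards. Image tags, database name, and ports are illustrative, not necessarily the exact versions used in the article:

```yaml
version: "3"
services:
  influxdb:
    image: influxdb:1.8
    volumes:
      - influxdb-data:/var/lib/influxdb
  cadvisor:
    image: gcr.io/cadvisor/cadvisor
    # Tell cAdvisor to push metrics into InfluxDB instead of only exposing its UI
    command: -storage_driver=influxdb -storage_driver_db=cadvisor -storage_driver_host=influxdb:8086
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
volumes:
  influxdb-data:
```

Grafana is then pointed at the InfluxDB service as a data source and queries the `cadvisor` database.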
Continue reading “Docker setup monitoring”
In my continuous effort to make my setup as redundant as possible, the next step is to add a load balancer. I ran into a few problems while setting it up, so I can share my experience:
- It is not possible to have the apex record of a domain point to a CNAME.
- Moving the domain names of a service that runs HTTPS requires great care.
I added a network load balancer in front of my Docker hosts so they can fail over to each other. After that, I moved datadriven-investment.com to www.datadriven-investment.com because of the apex DNS record problem mentioned above.
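For reference, AWS's way around the apex limitation is an alias record rather than a CNAME: Route 53 resolves the alias to the load balancer's addresses server-side, which is allowed at the apex. A change batch for it looks roughly like this, where the alias hosted zone id (which belongs to the load balancer, not your own zone) and the NLB DNS name are placeholders:

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "datadriven-investment.com.",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z00000000000000000000",
          "DNSName": "my-nlb-0123456789.elb.eu-west-1.amazonaws.com.",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```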
Continue reading “AWS load balancing Docker hosts and pain with HTTPS”
To improve the load times from the previous article, we must turn to caching. I have always been fascinated by technology that lets us serve pages very VERY fast, so in this article I am going to explore a few different options for making WordPress load faster using caching.
It is not feasible to make software like WordPress itself respond in less than 100ms; just loading the front page of this blog takes around 400ms, which is already fast for a WordPress site. So we need a caching system in front of it to improve the load time.
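One of the options is a cache inside Nginx itself, in front of PHP-FPM, so cached pages never touch WordPress at all. A sketch of such a setup follows; the zone name, sizes, lifetimes, and the `php-fpm` upstream are illustrative:

```nginx
# Illustrative cache zone; tune sizes and lifetimes to taste
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=wpcache:10m max_size=256m inactive=60m;

server {
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass php-fpm:9000;
        fastcgi_cache wpcache;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 301 10m;
        # Makes hits and misses visible in the response headers
        add_header X-Cache $upstream_cache_status;
    }
}
```

The `X-Cache: HIT` header is a convenient way to verify the cache is actually serving pages.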
Continue reading “WordPress load times below 100ms”
This article extends the setup explained in the previous article.
Briefly, the setup consists of a load balancer, an HTTP server, and a PHP-fpm backend, all running in a Docker Swarm environment as explained here.
Previously, the load balancer was bound to the manager node in the Docker swarm because it needed access to the Let’s Encrypt certificate files. To prepare for a fully replicated and fault-tolerant design, this needs to be fixed so it can run from any node.
Thanks to the mesh network in Docker swarm, the load balancer does not need to run on the manager node where the external IP is bound. It can run on any host; the mesh network routes each request to the right container. But that requires us to replicate the Let’s Encrypt certificates and make sure they can be renewed and reloaded regardless of which host the load balancer is running on. This article explains how I changed that and moved renewal into Docker.
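A sketch of moving renewal into Docker: a small certbot service keeps renewing on a loop, with the certificates and the ACME webroot on named volumes that the load balancer also mounts. The volume names and the renewal interval are placeholders, not the article's exact setup:

```yaml
services:
  certbot:
    image: certbot/certbot
    volumes:
      - letsencrypt-certs:/etc/letsencrypt
      - letsencrypt-webroot:/var/www/certbot
    # Re-check roughly twice a day; certbot only renews certificates close to expiry
    entrypoint: sh -c 'trap exit TERM; while true; do certbot renew --webroot -w /var/www/certbot; sleep 12h; done'
volumes:
  letsencrypt-certs:
  letsencrypt-webroot:
```

The load balancer still has to reload its configuration after a renewal, which can be handled with a similar periodic reload or a signal from the renewal hook.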
Continue reading “Let’s Encrypt selfcontained inside Docker”
There are many reasons to run a website on HTTPS instead of plain HTTP. One is that Google Chrome will soon start marking HTTP sites as insecure, possibly spooking your visitors. It is also a signal to your visitors that the communication between them and your website is protected.
In this article, I will describe how to set up Let’s Encrypt which provides free HTTPS certificates. It is part of a continuous effort to make the setup, described in the earlier articles, best-practice. I also offer some background information about HTTPS certificates for the interested reader.
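Once a certificate exists, the Nginx side is only a few lines of configuration. In this sketch, example.com stands in for the real domain and the certificate paths are the ones Let’s Encrypt uses by default:

```nginx
server {
    listen 80;
    server_name example.com;
    # Redirect all plain HTTP traffic to HTTPS
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name example.com;
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
}
```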
Continue reading “Setting up HTTPS on Nginx using Let’s Encrypt”
Performance is essential for any website. Slow sites are a pain for visitors, and they often put excessive load on the servers as well. To improve the performance of a website, a few tools will help us pinpoint areas to enhance. The improvements are a combination of settings in Nginx and WordPress; they build on the setup described in this article, but you can use the advice standalone. Most of the optimizations apply to any web platform, not just WordPress and Nginx.
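As a taste of the Nginx side, compression and long-lived caching of static assets are two of the cheapest wins. The MIME types and lifetimes below are illustrative, not the article's exact values:

```nginx
# Compress text-based responses on the fly
gzip on;
gzip_types text/css application/javascript application/json image/svg+xml;

# Let browsers cache static assets for a month
location ~* \.(css|js|png|jpg|jpeg|gif|svg|woff2)$ {
    expires 30d;
    add_header Cache-Control "public";
}
```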
Continue reading “Nginx and WordPress performance optimization 78% load time improvement”
This article builds on the platform described in the previous seven parts: a WordPress setup running on AWS using Docker. Here we look into how to improve uptime and scalability by replicating the service across multiple servers. To allow for replication, several challenges need to be solved; this article covers how to handle them, including a few problems I ran into, such as Docker nodes running out of memory and network issues in Docker swarm mode, and how to fix them.
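For context, the GlusterFS side of the replication boils down to a handful of commands run once across the nodes. This is a sketch only; the hostnames and brick paths are placeholders:

```
# Run on node1; node2/node3 and the brick path are placeholders
gluster peer probe node2
gluster peer probe node3
gluster volume create shared replica 3 \
    node1:/data/brick node2:/data/brick node3:/data/brick
gluster volume start shared

# On each host, mount the replicated volume
mount -t glusterfs localhost:/shared /mnt/shared
```

With `replica 3`, every file is stored on all three nodes, which is what makes losing a single host survivable.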
Continue reading “Docker setup – part 8: GlusterFS and Docker on multiple servers”