At DockerCon EU 18 I gave a tech talk about monitoring Docker containers in swarm mode. Since the talk was only 20 minutes, it was not possible to cover all the interesting details. This article provides some additional information and a tutorial for setting up a simple monitoring infrastructure for swarm mode. You can jump directly to the tutorial here.
We do not log in to our servers every day to check resource usage. Just as with uptime monitoring, we need a system that tells us whether everything is within reasonable limits, so we can scale the servers when required and detect potential issues before they become real problems.
In my continuous effort to make my setup as redundant as possible, the next step is to add a load balancer. I ran into a few problems while setting it up, so let me share my experience.
It is not possible to use a CNAME record at the apex of a domain.
Moving the domain names of a service that runs HTTPS requires great care.
I added a network load balancer to sit in front of my Docker hosts to allow them to fail over for each other. After that, I moved datadriven-investment.com to www.datadriven-investment.com because of the apex DNS record problem mentioned above.
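The apex restriction can be sketched as a hypothetical zone-file fragment (the record values below are illustrative, not the actual DNS configuration of the site):

```
; A CNAME cannot coexist with other records at the same name (RFC 1034),
; and the apex always carries SOA and NS records — so an apex CNAME is invalid.
datadriven-investment.com.      IN  CNAME  lb.example.net.   ; invalid at the apex
www.datadriven-investment.com.  IN  CNAME  lb.example.net.   ; fine on a subdomain
datadriven-investment.com.      IN  A      203.0.113.10      ; the apex must use A/AAAA (or a provider-specific ALIAS)
```

This is why pointing the bare domain at a load balancer's DNS name fails, while pointing the www subdomain at it works.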
This article extends the setup explained in the previous article.
Briefly, the setup consists of a load balancer, an HTTP server, and a PHP-fpm backend, all running in a Docker Swarm environment as explained here.
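As a rough sketch, such a stack could be described in a compose file like the one below. The image choices, service names, and ports are assumptions for illustration, not the actual configuration from the previous article:

```yaml
# Hypothetical docker-compose.yml for a swarm stack with a load balancer,
# an HTTP server, and a PHP-FPM backend.
version: "3.7"
services:
  lb:                      # load balancer, terminates external traffic
    image: haproxy:2.4
    ports:
      - "80:80"
      - "443:443"
  web:                     # HTTP server, forwards PHP requests to the backend
    image: nginx:1.25
  php:                     # PHP-FPM backend, reached by the web service
    image: php:8-fpm
```

A stack like this would be deployed with `docker stack deploy -c docker-compose.yml mystack`; the services reach each other by service name over the stack's overlay network.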
Previously the load balancer was bound to the manager node in the Docker swarm because it needed access to the Let’s Encrypt certificate files. To prepare for a fully replicated and fault-tolerant design, this needs to be fixed so it can run on any node.
Because of the mesh network in Docker swarm, the load balancer does not need to run on the manager node where the external IP is bound. It can run on any host; the mesh network will route the request to the right container. But that requires us to replicate the Let’s Encrypt certificates and make sure they can be renewed and reloaded independently of which host the load balancer is running on. This article explains how I changed that and moved renewal into Docker.
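One way to sketch this idea is a compose fragment where renewal runs in its own service and the certificates are shared through a named volume. The service names and volume driver here are assumptions; a real multi-node setup needs a volume driver that replicates data across hosts (for example an NFS-backed volume), and the renewal container would be triggered periodically rather than run once:

```yaml
# Sketch: renew Let's Encrypt certificates inside the swarm and share them
# with the load balancer via a volume, so neither is pinned to one node.
version: "3.7"
services:
  certbot:
    image: certbot/certbot
    entrypoint: ["certbot", "renew"]     # in practice, scheduled periodically
    volumes:
      - letsencrypt:/etc/letsencrypt
  lb:
    image: haproxy:2.4
    volumes:
      - letsencrypt:/etc/letsencrypt:ro  # read-only access to the certificates
volumes:
  letsencrypt:
    driver: local                        # replace with a multi-host driver in practice
```

The key point is that certificate storage and renewal become independent of the node the load balancer happens to be scheduled on.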
Your development environment is a tool that should be sharpened to allow maximum productivity. I have seen many developers whose environment was less than optimal, and it hurt their productivity.
These range from developing directly on production sites, to developing on a shared server, to using a local development environment, which I think is the optimal way to do it.
Developing directly on production is “fast”, but remember the quote:
If you develop directly in production, the business will be all over you when you break things, and you will break things! So take the time to get a setup that allows you to go fast in the future.
A local development environment has many advantages, and with Docker it is easy to set up. I will show how I handle my development setup, including tips for taking advantage of a local development environment.