Dockerize Me – Several Services and LetsEncrypt – Part 3

After the basic installation of Docker and getting a WordPress blog up and running in Docker containers in part 1, this part of my Docker series will take a look at how to add two important features to the setup:

  • TLS certificates from LetsEncrypt with auto-renewal
  • How to host several websites on one server that are all accessible on the same ports (80 for HTTP and 443 for HTTPS)

I decided to extend the initial WordPress setup for two reasons. First, the early example in part 1 is not really suitable for live network deployment, as it was missing HTTPS access. And second, the power of Docker is to run many independent containers on a single server, but especially for web servers it is important that they are all reachable from the outside on the standard web ports (80 and 443).

Serving Several Web Sites From One Server – The Classic Approach

In a non-Dockerized approach, serving different web sites on the same ports is usually done with the ‘virtual hosts’ feature of the single web server that hosts all web sites. If three independent WordPress blogs are hosted on that server, the (single) web server decides, based on the domain name given during connection establishment, from which sub-directory files are served and server-side code is executed.
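To make this a bit more tangible, here is a minimal sketch of what such a configuration could look like in nginx syntax; the domain names and directories are made up for illustration:

# Two ‘virtual hosts’ on a single classic web server: the server_name
# directive decides from which root directory a request is served
server {
    listen 80;
    server_name blog-1.example.com;
    root /var/www/blog-1;
}

server {
    listen 80;
    server_name blog-2.example.com;
    root /var/www/blog-2;
}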

Reverse Proxies

As the Docker approach to web services is to have many independent containers, i.e. one or more containers for each WordPress blog, a different approach has to be taken. One way to do this is to use a reverse (web) proxy (in a container) as the front end for ports 80 and 443 and to forward incoming requests to the containers which serve the individual websites. In a setup with three blogs, 4 web servers are involved: the web server used as reverse proxy and one web server for each WordPress instance. Without any bells and whistles, that already means 7 containers: one for the reverse proxy and two for each WordPress blog (web server and database).
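In the Docker setup described below, such configuration blocks are generated automatically, so nothing has to be written by hand. Conceptually, though, what the reverse proxy does for each domain looks roughly like the following nginx sketch; the domain, container name and certificate paths are made up for illustration:

server {
    listen 443 ssl;
    server_name blog-1.example.com;
    # certificate files for this domain (paths are only an example)
    ssl_certificate     /etc/nginx/certs/blog-1.example.com.crt;
    ssl_certificate_key /etc/nginx/certs/blog-1.example.com.key;
    location / {
        # forward incoming requests to the web server container of this blog
        proxy_pass http://blog-1-container:80;
        proxy_set_header Host $host;
    }
}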

Other Uses of Reverse Proxies

Before going on, here are a few more thoughts on reverse proxies. Content distribution networks and large websites use reverse proxies extensively for a number of tasks. For web sites with lots of traffic, reverse proxies can receive requests and distribute them to back-end web servers where the answers are generated. This makes particular sense when generating response pages is computationally intensive: while a single reverse proxy can forward the requests with little effort, the web servers behind it do the heavy lifting in a work-sharing fashion. Also, the reverse proxy can serve static pages straight away, which further reduces the workload of the back-end servers.

Another reason for using reverse proxies is to have a single entry point that can be protected against outside attacks much more easily than hardening every server behind it. Reverse proxies are also used by Content Distribution Networks such as Cloudflare to fend off DDoS attacks on web services by putting themselves ‘in the line of fire’. The downside, however, is that TLS is terminated at the reverse proxy, so the CDN provider can see all content in the clear. For some services that is quite an issue. For our project, not having encryption between the reverse proxy and the backend web servers is less of a problem, as we run the reverse proxy ourselves. So experimenting with reverse proxies is worthwhile not only for understanding Docker setups but also to get an insight into their use for other purposes.

A Reverse Proxy For Serving Different Web Sites With Docker

Let’s get back to our Docker application of serving several web sites through a single front end. This project of Evert Ramos on Github creates such a setup and links to a number of sub-projects that automate the creation of containers for WordPress and other web applications that attach to the reverse HTTP proxy. When you look closer, you’ll see that the reverse proxy does not consist of just 1 but of 3 containers. The first container runs the reverse proxy itself. The other two ‘companion’ containers manage the automatic creation of new virtual host configurations for the reverse proxy web server and the automatic generation of LetsEncrypt certificates when a new dockerized application becomes part of the Docker setup. In other words, a setup with 3 independent WordPress instances involves 9 containers (3 for the reverse proxy and 2 for each WordPress blog).

Trying This Out In The Public Cloud

Unlike for the previous examples, I had to go and ‘rent’ a virtual server in the public cloud. This is because I already use ports 80 and 443 of my Internet connection at home for other services, so I can’t use them for this setup. Using other ports is not possible, because requesting LetsEncrypt certificates triggers a request from LetsEncrypt to port 80 of the server as proof of ownership of the domain name for which a certificate is requested. But that’s not much of a problem, as a small Ubuntu server with a public IP address can be rented for this purpose for less than 3 euros a month, for example from Hetzner.
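One practical note in this context: the setup only works if ports 80 and 443 of the server are actually reachable from the outside, and port 80 in particular is needed for the LetsEncrypt validation described above. If a firewall such as ufw is used on the server, the two ports would have to be opened first, for example like this:

# Only needed if the ufw firewall is enabled on the server
ufw allow 80/tcp
ufw allow 443/tcp
ufw status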

The other things that are required are, of course, domain names mapped to the IP address of the server. Fortunately, I already own a number of domains such as wirelessmoves.com, and my DNS provider makes it easy to create new subdomains and point them to an IP address with a few clicks.
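Whether a new subdomain really points to the server can be checked from any machine before starting the containers, for example with dig; the subdomain used here is again just a placeholder:

# The command should return the public IP address of the server
dig +short blog-1.example.com A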

Once the server and a few subdomains are in place, installing the reverse proxy via docker-compose with Evert Ramos’ project just takes a few commands:

# Install Docker and Docker-compose as described in the first part 

# Install git
apt install git

# Now clone the github repository
git clone --recurse-submodules https://github.com/evertramos/nginx-proxy-automation.git proxy 

Once git finishes, a new directory is present that contains a script to get the reverse proxy running. The only things that need to be given to the ‘fresh-start.sh’ script are the IP address of the server and an email address. And that’s pretty much it:

cd proxy/bin
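# The script asks for the IP address of the server and an email address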
./fresh-start.sh

And that’s already it! When you run ‘docker ps’, you will see 3 Docker containers running: the reverse proxy itself and its two companion containers.
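If you are only interested in the container names and published ports, ‘docker ps’ can also be given a format string. The exact container names depend on the project’s defaults; the LetsEncrypt companion, for example, shows up as nginx-letsencrypt, the name used further below:

# Show only the names and published ports of the running containers
docker ps --format 'table {{.Names}}\t{{.Ports}}'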

O.k., so that’s the reverse proxy. To be useful, we need at least one service behind it. So let’s use the WordPress companion project of the same author to create a WordPress instance that is accessible via the reverse proxy:

cd ~
git clone https://github.com/evertramos/docker-wordpress.git

mv docker-wordpress docker-wordpress-1
cd docker-wordpress-1

cp .env.example .env

# Change the following lines in the .env file
COMPOSE_PROJECT_NAME=new-site
CONTAINER_DB_NAME=new-site-db
CONTAINER_SITE_NAME=new-site-site
DOMAINS=domain.com,www.domain.com  # For the rev. proxy and LetsEncrypt!!!

# Note: For experimenting you can leave the DB usernames/passwords as they
# are. For production you should obviously change them...

# Now pull all images, configure containers and start them
docker-compose up -d

And again, that’s it, the WordPress blog is up and running after half a minute or so! It takes this long because, the first time around, the LetsEncrypt container of the reverse proxy has to request a certificate for the domain name given above.

You can observe the progress by following the logs of the LetsEncrypt container as follows:

docker logs nginx-letsencrypt --follow
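Once the certificate has been issued, a quick check from any outside machine shows whether the reverse proxy serves the new blog over HTTPS; the domain name below is again just a placeholder:

# Returns the response headers of the new blog; fails with a certificate
# error if the LetsEncrypt certificate is not in place yet
curl -I https://blog-1.example.com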

The Second WordPress Blog Behind the Reverse Proxy

O.k., that’s nice: we have 5 running containers now and the WordPress instance can be reached over an HTTPS connection via its domain name. So now let’s be bold and bring up another WordPress blog with a different domain name (that was also previously pointed to the IP address of the server). Just repeat the steps above with different names for the folder, containers and domains, e.g. with the .env values shown below, and run docker-compose up -d in the other folder.
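For illustration, the changed lines in the second project’s .env file could look like this (the domain names are again made up; everything else stays as in the first copy):

COMPOSE_PROJECT_NAME=second-site
CONTAINER_DB_NAME=second-site-db
CONTAINER_SITE_NAME=second-site-site
DOMAINS=blog-2.example.com,www.blog-2.example.com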

‘docker ps’ now shows 7 containers. The WordPress instances are only reachable internally on port 80, and only the nginx container of the reverse proxy is reachable from the outside on ports 80 and 443.

Two Database Containers To Choose From…

And one more thing to have a look at in this post: Each WordPress project has its own database container with a separate IP address, and in both database containers the database is reachable on port 3306. So how does each WordPress instance know which (container) IP address to talk to? The answer lies in the docker-compose.yml file in the project directory. Here, the WORDPRESS_DB_HOST environment variable points to the container name of one of the database containers, which we configured differently for each WordPress installation via the CONTAINER_DB_NAME variable in the .env file. Docker’s internal DNS then resolves this container name to the IP address of the corresponding database container.
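A heavily simplified sketch of the relevant part of such a docker-compose.yml could look as follows. The real file in the project contains many more settings, so this is only meant to illustrate the mechanism:

services:
  db:
    image: mariadb                           # database container
    container_name: ${CONTAINER_DB_NAME}     # e.g. new-site-db from the .env file
  wordpress:
    image: wordpress                         # web server + WordPress container
    container_name: ${CONTAINER_SITE_NAME}   # e.g. new-site-site
    environment:
      # Docker's internal DNS resolves this name to the IP address
      # of the database container on the project's network
      WORDPRESS_DB_HOST: ${CONTAINER_DB_NAME}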

Summary

By now we have a setup with 7 containers that serves two WordPress blogs with their own domain names and LetsEncrypt certificates. It was installed with just a few commands and by setting a few variables, and we did not have to deal with any kind of web server or proxy configuration at all. Nor did we need any background knowledge about how to install WordPress. Adding more WordPress sites takes only little additional effort and shows the power of the containerized approach for deploying many similar or identical services.

But the story is far from over. In the next part of this series, I’ll have a look at the following things:

  • Move the WordPress installation from part 1 from my local server to this public cloud server with the reverse proxy setup. This demonstrates the flexibility of containers.
  • How does the reverse proxy part of the setup notice that a new internal server is started and for which domain name a LetsEncrypt certificate should be requested?
  • What is the chain of trust for the reverse proxy setup, i.e. whom do I have to trust to keep this installation secure (in addition to the folks at nginx, WordPress and Docker)?

2 thoughts on “Dockerize Me – Several Services and LetsEncrypt – Part 3”

  1. Hi,

    interesting that you go down that whole rabbit hole. But I would like to point out (if you did not already come across it by yourself) that there is a docker container for exactly that reverse proxy problem named Traefik. The beauty of this solution is that you can add containers after the initial setup of Traefik without touching it again – setup and configuration is done via docker labels, for example to be set inside docker-compose.yml. There is a multitude of howtos to be found and therefore I am not going to pester you with a whole list of links, save this one here (on my “Heimatseite im Zwischennetz”): https://elbosso.github.io/tcp_routing_traefik_2_x.html#content
    Of course, by the time you go from docker-compose to Kubernetes all of that is already built into the whole concept of ingress controllers, but as long as one is content with docker-compose, Traefik really accelerates the setup of new containers – especially ones that offer an HTTP(S) interface, regardless of whether it is a browser interface or some REST services. I actually build all my containers with the needed labels prepared in the docker-compose.yml now (ok, one more link: https://elbosso.github.io/tag_Docker.html)

  2. Hi Juergen, I came across Traefik as well, looks like another good (or better?) solution. Thanks!
