Over the years, I’ve looked at Docker every now and then: for one thing, I am interested in the concept of deploying software in lightweight containers instead of VMs, and I am also already running two pieces of software in Docker containers on my servers. But that was it until now, because Docker has a steep learning curve and it was pretty pointless for me to really go for it without a use case of my own. This year, however, my interest started growing for a number of reasons, so I decided to ride the wave and put together a curriculum that fits my needs. So here’s what motivated me to spend the time and effort, and my recommendations for how to go about it.
The Reasons
So let’s talk about my reasons for undertaking this learning journey first. There are actually quite a number of them. The 5G core network (5GC), for example, is specified in a way that lends itself to containerized deployment, and a better understanding of Docker and of how large numbers of containers are managed (orchestrated) with Kubernetes would be a plus for understanding the whole story. Also, a number of services I host myself in separate virtual machines lend themselves to being run on a single machine in containers, and the respective projects do offer Docker versions of their software. Understanding the deployment side of Docker and Kubernetes is only one side of the story, however. The other side is what I could do with Docker myself. I have a rather large spare-time software project that is difficult to deploy in practice because it takes many steps to set up the web server, the database and the basic project configuration for a new installation. In other words, I did not only want to learn how to deploy Docker containers of other projects, but also how to build Docker images for my own projects.
Step 1: A Basic Installation
So here’s the story of how I went about it: to experiment with Docker, it needs to be installed first. The documentation on the Docker website is a bit daunting for newcomers, as it explains all the bells and whistles and assumes that the reader already knows a fair amount about Docker. So, apart from a few pages on how to get going, I recommend NOT looking at the official Docker website until much later. Instead, search for independent tutorials that leave out all the complicated stuff that isn’t needed at the beginning anyway.
Getting the runtime environment working is quite straightforward, even if the official Docker page that describes the installation does not look that way at first glance. In short, here’s how to install Docker on a Debian-based Linux with just a few commands:
# Some preliminaries (the first sudo command probably
# gives an error message. Ignore)
sudo apt-get remove docker docker.io \
     containerd runc
sudo apt-get install apt-transport-https ca-certificates \
     curl gnupg-agent software-properties-common

# Add the Docker repository
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
     sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
     $(lsb_release -cs) \
     stable"

# And now install the 3 Docker packages
sudo apt update
sudo apt-get install docker-ce docker-ce-cli containerd.io

# Add user to docker group so docker can be run as a
# normal user
sudo usermod -aG docker <YOUR-USERNAME>
A quick logout/login to apply the new group to the user and you are almost done with the basic installation.
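To check that everything works, including the group membership (i.e. that no ‘sudo’ is required anymore), the classic test image from Docker Hub can be run:

# Downloads a tiny test image from Docker Hub and runs it once;
# the container prints a greeting message and exits
docker run --rm hello-world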
Many projects use docker-compose to script the configuration, interaction and deployment of several containers at once. It’s also maintained by Docker and can be installed with the two commands described here. In December 2020, the latest version was 1.27.4, which is installed on 64-bit Intel systems as follows:
# Download the binary
sudo curl -L "https://github.com/docker/compose/releases/download/1.27.4/docker-compose-$(uname -s)-$(uname -m)" \
     -o /usr/local/bin/docker-compose

# Make it executable
sudo chmod +x /usr/local/bin/docker-compose

# Check that things are working
docker-compose --version
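Before playing with docker-compose, it is also worth making sure that the Docker daemon itself is running and comes back after a reboot. On a standard Debian/Ubuntu system with systemd, the package normally takes care of that already, but it can be checked and, if necessary, enabled like this:

# Check that the Docker service is up
sudo systemctl status docker

# Enable and start it in case it is not
sudo systemctl enable --now docker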
Step 2: The Low Hanging Fruit – Let’s Get A WordPress Blog Running
For my first experiments, I decided to install Docker and docker-compose in a separate virtual machine on my server at home, as there is no need to have it installed locally. After all, Docker is mainly intended for deploying services on servers rather than for local programs. To get things started with something ‘real’, I decided to experiment with installing the WordPress blogging software in containers. WordPress uses a database such as MySQL, which is installed in a separate container. WordPress, running in another container, then talks to the database over a virtual (TCP/IP) network link rather than via ‘localhost’, as it would if both were running in the same container or if WordPress was installed together with the database in a virtual machine. The great advantage of separating WordPress and the database is that each can be upgraded independently of the other. This follows the ‘microservice’ design pattern and the idea that each container should only run one piece of software.
Side note: Understanding and experimenting with the concept of separating different pieces of software into different containers helps a great deal in understanding the 5G core network architecture, which is also based on small services that can be containerized. This is a significant break from earlier core network specifications, which used big monolithic services that were implemented together and run on a specific piece of hardware or in a virtual machine.
Coming back to the WordPress example: here’s the link to the Docker web page that shows how it is done. In essence, all that has to be done is to create a directory, copy/paste the Docker Compose script from that page into a text file named docker-compose.yml,
and then run docker-compose as follows:
docker-compose up -d
And that’s it! The first time the command is run, Compose realizes that the two Docker images required for running the two containers are not yet present on the machine and downloads them from the Docker Hub image repository. Once that is done, the two images are used to spawn the two containers described in the compose file, and the new WordPress blog can be reached on TCP port 8000. Yes, that is really all! No manual web server and database setup, no configuration, nothing!
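For a look behind the scenes, a few standard commands show what Compose has set up, e.g. which containers are running, what they write to their logs, and the private network created for them:

# The two containers started from the compose file
docker-compose ps

# Follow the log output of the wordpress container
docker-compose logs -f wordpress

# The downloaded images and the network Compose created
docker image ls
docker network ls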
Use ‘docker-compose down’ from the same directory to stop the containers again. To run two blogs on the same server at the same time, each on a different TCP port, just do the same thing again in a different directory and modify the port mapping line, e.g. as shown below.
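In the compose file of the second blog, only the ports entry of the wordpress service needs to change; 8001 is just an arbitrary free port picked for this example:

    ports:
      - "8001:80"   # host port 8001 -> port 80 inside the container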
Step 3: Getting To the Database and Upload Directory
One basic principle of the Docker approach is that containers are ‘ephemeral’, i.e. all data stored inside a container is lost when the container is removed, which is exactly what ‘docker-compose down’ does. That means that all data that has to be persistent needs to be stored outside the container. In the example above, the database container uses a ‘named volume’ in /var/lib/docker to store the database files. The WordPress container, however, uses no external storage at all. That means that images uploaded for blog posts are lost once the container is removed. For a first demo that’s all right, but it’s not what one wants in practice. So, after shutting down the two containers with
docker-compose down
…I modified the docker-compose.yml file to store the database files and the WordPress directory (including the upload folder) in ‘local’ directories below the directory in which the yml file is located. I’ve pasted the contents of my modified yml file below; the changes are the two local volume mappings (./db_data and ./wordpress) and the removal of the named volume definition at the end. This way, the uploaded images for blog posts are still present when restarting the WordPress container. Also, one can simply take the directories and the yml file, put them on a different server, and the blog comes up with all content there as well without any further ado.
version: '3.3'

services:
  db:
    image: mysql:5.7
    volumes:
      - ./db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress

  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    restart: always
    volumes:
      - ./wordpress:/var/www/html
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress

# MS: Not needed, as we use 'local' directories
#volumes:
#  db_data: {}
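Moving the blog to another machine then really just means copying the project directory. A quick sketch of how this could look (‘otherhost’ and /opt/myblog are placeholders for this example; Docker and docker-compose must of course already be installed on the target machine):

# Stop the containers so the database files are in a consistent state
docker-compose down

# Copy the yml file and the two data directories over
rsync -a ./ otherhost:/opt/myblog/

# ...and start the blog on the other machine
ssh otherhost "cd /opt/myblog && docker-compose up -d"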
Initial Data in Volumes
There is one important point to understand about ‘volumes’ that map to a local host directory, which the Docker documentation calls ‘bind mounts’: if directories that exist in the Docker image are mapped to local directories (as in this example), the initial content from inside the Docker image ends up in the outside local directory the first time a container is instantiated from the image! The files can then be modified from the outside, and programs inside the container can of course also modify them while the container is running. When a new container instance is started from the Docker image and the volume directory already contains data, that data is left in place and is not overwritten with the initial content again.
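This is easy to see on disk: before the very first ‘docker-compose up -d’ with the yml file above, the two local directories do not even exist; right after it, they are populated by the containers:

# Created and filled on the first start of the containers
ls ./db_data      # MySQL's database files
ls ./wordpress    # the complete WordPress installation (wp-admin, wp-content, ...)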
Next Steps
So far, so good: two containers are running and talking to each other. For a real-world deployment one wants a bit more, however, such as SSL certificates for encryption and several websites being served to the outside world on the same TCP ports. I’ll continue with that in the next post.