App to Docker – Part 2 – Environment Variables

In the previous part, I started working on getting the server-side code of my web-based database application into containers for easy deployment, maintenance, and updates. The particular focus in that episode was how to automatically pre-populate volumes for persistent data with things such as default configuration files. In this part, I’ll have a look at how to use environment variables to configure the parameters my container requires to communicate with an accompanying MySQL database container.

I could of course have installed MySQL in my Apache Web Server / Code container, but that would not have been an elegant overall solution. By putting the database in a separate container, I can use a ‘ready-to-go’ container provided by the MySQL team that is configured with environment variables. Also, this might make a later move from MySQL to MariaDB simpler once I have a bit of time for it.

In my server-side code, I use a configuration file to provide the code with the location, username, and password for the database. Also, manually installing the system meant manually configuring the database, which is a bit of a pain, particularly in a Docker environment. The MySQL Docker container, on the other hand, automatically configures a database when it starts for the first time, and I of course wanted to use this mechanism too, to make everything work out of the box. So I decided to make my code use environment variables and fall back to the variables from the configuration file (for manual installation) if they are not provided. Here’s what such a piece of code looks like in PHP:

self::$_mysqlPass = getenv('WEBDB_PASSWORD');
if (self::$_mysqlPass === false) {
    $log->lwrite('NO ENV VARIABLE FOR MYSQL PWD, using config');
    self::$_mysqlPass = $CONFIG['mysqlPass'];
}
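Since the same pattern repeats for every variable, it could also be factored into a small helper. Here’s a hypothetical sketch of that idea; the function name envOrConfig and its parameters are my own and not part of the actual code:

```php
<?php
// Hypothetical helper: return the value of an environment variable,
// falling back to an entry in the config array when it is not set.
function envOrConfig(string $envName, array $config, string $configKey)
{
    $value = getenv($envName);
    if ($value === false) {
        // getenv() returns false when the variable does not exist
        return $config[$configKey];
    }
    return $value;
}

// Usage, e.g.:
// self::$_mysqlPass = envOrConfig('WEBDB_PASSWORD', $CONFIG, 'mysqlPass');
```

The logging call from the snippet above could of course be added inside the fallback branch as well.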

For each variable that is read from the config file by default, two extra lines of code are needed to read the corresponding environment variable and, if it does not exist, fall back to the variable provided in the config file. Also, I added comments to the default configuration file and to the documentation of the system so this will not be forgotten. And here’s what the environment variables look like in the docker-compose.yml file:

version: '3.3'

services:
  db:
    image: mysql:5.7
    volumes:
      - ./volumes/db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: zG3uaKeKV4CZ4FuS
      MYSQL_DATABASE: webdb_docker_main
      MYSQL_USER: webdb_docker_user
      MYSQL_PASSWORD: qJg6LQfBZ4fnfrXe

  webdb:
    depends_on:
      - db
    image: webdb:latest
    ports:
      - "8500:80"
    restart: always
    volumes:
      - ./volumes/webdb-cfg:/var/www/html/webdb/config
      - ./volumes/webdb-pwd-dir:/webdb-pwd-dir
    environment:
      WEBDB_DB_HOST: db:3306
      WEBDB_DATABASE: webdb_docker_main
      WEBDB_USER: webdb_docker_user
      WEBDB_PASSWORD: qJg6LQfBZ4fnfrXe

      WEBDB_SERVER_PORT: 8500
      WEBDB_SERVER_PROTOCOL: http
      WEBDB_VIRTUAL_HOST: localhost

Truth be told, it is not strictly necessary to have environment variables in my container for the database part; I could use the existing variables in the configuration file as well. However, two reasons speak against this.

First, it’s good to have those variables in the docker-compose.yml file so they can easily be matched against the corresponding variables on the database container side. Having one copy in the yml file for the database container and the other copy of those variables in a config file would not be ideal.

The second reason, which is probably even more important, is the WEBDB_DB_HOST variable, which contains the host name and the TCP port number of the database. The default configuration file for a ‘manual’ installation would point to localhost or localhost:3306, as the database usually runs on the same server. In a Docker environment, the database runs in another container, so the host name can’t be ‘localhost’. That’s not ideal, because I would need two different default configuration files: one for manual installations and one for the Docker environment.
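To keep a single code path for both cases, the WEBDB_DB_HOST value can be split into host and port before connecting. A minimal sketch, assuming the variable holds either ‘host’ or ‘host:port’ (the helper name splitDbHost is mine, not from the original code):

```php
<?php
// Hypothetical sketch: split 'db:3306' or 'localhost' into host and
// port, defaulting the port to MySQL's standard 3306.
function splitDbHost(string $dbHost): array
{
    [$host, $port] = array_pad(explode(':', $dbHost, 2), 2, '3306');
    return [$host, (int)$port];
}

// e.g.: [$host, $port] = splitDbHost(getenv('WEBDB_DB_HOST') ?: 'localhost');
// $mysqli = new mysqli($host, $user, $pass, $database, $port);
```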

More Environment Variables Needed

In addition to the database configuration, I noticed during testing that I needed a number of extra variables that are not required for a manual installation that connects directly to the Internet. For security reasons, the server-side code checks which domain name was used to access the web server. And for creating permalinks to particular database records, the hostname, the protocol (http, https), and the TCP port number are required. None of this information can be gathered from the web server if the container runs behind a reverse proxy, because all communication would run over http and port 80, even if the reverse proxy was contacted over https and port 443. So again, I had to adapt my code running on the server to use environment variables if they are present. There’s no way around it. Note that the example above does not reflect this, because that docker-compose.yml file is a version without a reverse proxy. The one that uses a reverse-proxy frontend looks as follows:

version: '3.3'

services:
  db:
    image: mysql:5.7
    volumes:
      - ./volumes/db_data:/var/lib/mysql
[...]

  webdb:
    depends_on:
      - db
    image: webdb:latest
    restart: always
[...]
    environment:
      WEBDB_DB_HOST: db:3306
[...]

      WEBDB_SERVER_PORT: 443
      WEBDB_SERVER_PROTOCOL: https
      WEBDB_VIRTUAL_HOST: xx.yyy.de

      VIRTUAL_HOST: xx.yyy.de
      LETSENCRYPT_HOST: xx.yyy.de
      LETSENCRYPT_EMAIL: noreply@test.com

networks:
  default:
    external:
      name: webproxy
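For the permalinks mentioned earlier, the three WEBDB_SERVER_* variables can be combined into the externally visible base URL. A hypothetical sketch of that logic, assuming the defaults shown here (the helper name externalBaseUrl is mine, not from the actual code):

```php
<?php
// Hypothetical sketch: build the external base URL from the environment
// instead of from the (proxy-masked) web server variables.
function externalBaseUrl(): string
{
    $proto = getenv('WEBDB_SERVER_PROTOCOL') ?: 'http';
    $host  = getenv('WEBDB_VIRTUAL_HOST') ?: 'localhost';
    $port  = getenv('WEBDB_SERVER_PORT') ?: '80';

    // Omit the port when it is the default for the protocol.
    $isDefault = ($proto === 'http' && $port === '80')
        || ($proto === 'https' && $port === '443');

    return $proto . '://' . $host . ($isDefault ? '' : ':' . $port);
}
```

With the reverse-proxy compose file above, this would yield https://xx.yyy.de; with the first compose file, http://localhost:8500.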

So while I was hoping at first that I would not need to write extra code on the server side, I came to the conclusion that it’s better to do just that than to come up with a kludge.

So far, so good. With the few things described in this and the previous episode, I’ve containerized my web database server and made it really simple to deploy it in practice. Apart from creating a directory for Docker compose and unpacking a tar file that contains the source code and the Docker configuration files, a demo setup is only two commands away:

docker build -t webdb .
docker-compose up -d

And that’s it. After a little while, the service is available on localhost at port 8500, as per the docker-compose file above. Doing everything manually would have taken even me, with in-depth knowledge of the system, the better part of an hour. This way, I can do it in a minute!

So much for today. In the next episode, I’ll have a look at the update process for an already existing installation.