After the six ‘Dockerize Me’ and four ‘Kubernetes Intro’ episodes that mainly dealt with getting a good understanding of how to use containers, the time has come for me to deal with the ‘end boss’ of the game: Dockerizing the web-based database system I’ve been working on in my quality time over the years.
I have toyed with the idea of open-sourcing the code for some time, but installing and configuring Apache, MySQL, TLS certificates and the system itself is beyond most people who would be interested in using it. Putting the system into containers for easy deployment, operation and upgrades, however, would change the game. So in this and the following episodes, I give an overview of the steps I have taken to containerize my system. I think the time to write this down is well invested, as the steps I have taken are a good blueprint and will perhaps also be useful for your future projects.
In part 3 of my Dockerize-Me series, I looked at how to run several web services on the same server, make them accessible on the same ports (80 for HTTP and 443 for HTTPS) and add Let’s Encrypt TLS certificates for secure communication. This is typically done with a reverse HTTP proxy setup as described in that episode (in Docker containers, of course…). In the feedback, several people suggested that I should also look at Traefik as another alternative. As I like choice, I did just that.
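To give a flavour of why Traefik is an interesting alternative here: the proxy configures itself from labels on the containers it fronts. Below is a minimal `docker-compose.yml` sketch of such a setup, not my actual configuration; the domain, e-mail address and the `whoami` demo service are placeholder assumptions.

```yaml
# Sketch: Traefik v2 as a reverse proxy with Let's Encrypt TLS.
version: "3"
services:
  traefik:
    image: traefik:v2.4
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.email=admin@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.le.acme.tlschallenge=true
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # Certificates persist across restarts; Traefik watches the
      # Docker socket to discover services.
      - ./letsencrypt:/letsencrypt
      - /var/run/docker.sock:/var/run/docker.sock:ro

  whoami:
    # Placeholder backend: each additional web service only needs labels
    # like these, no change to the proxy itself.
    image: traefik/whoami
    labels:
      - traefik.enable=true
      - traefik.http.routers.whoami.rule=Host(`example.com`)
      - traefik.http.routers.whoami.entrypoints=websecure
      - traefik.http.routers.whoami.tls.certresolver=le
```

Adding a further service is then just another container with its own `Host(...)` rule label, which is exactly the kind of flexibility a reverse proxy for many services needs.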
When people ask me for the ‘elevator pitch’ on ORAN, my quick summary is that Open RAN aims to:
1) separate hardware from software;
2) put all software that does not require dedicated hardware on virtualized (x86) servers or into containers ‘in the cloud’ and use high-speed fiber links to connect to the remaining hardware (and software) at the distributed cell sites;
3) provide standardized interfaces between components in the RAN so software and hardware can come from different vendors; and
4) increase competition to foster innovation and reduce prices.
There are lots of interfaces standardized in ORAN, and each has its own story of how and why. One interface of particular interest to me is the fiber connection between the digital baseband unit at a cell site, usually at the lower end of a tower, and the radio unit, which is usually located at the top of the tower close to the antennas. This part of the RAN is also referred to as the ‘fronthaul’, and the CPRI protocol is used to exchange data over the link. So how does that work today, and how does ORAN want to change this interface to become more flexible and open?
In most cases, when we talk about ‘the cloud’ today, we are talking about virtual machines running on servers in a data center. The physical servers below those VMs have neither a screen nor graphics hardware attached, and the virtual machines running on them are usually only accessed via ssh for maintenance and configuration. That is probably because running a virtual machine with a GUI and a desktop in the cloud that is then accessed over VNC or RDP is not done very often; it comes in handy for quite a number of applications, though. When I recently experimented with this, I noticed that there aren’t many descriptions of how to set up such a system. And those that exist did not lead to immediate success, as the way a Linux desktop is started seems to change constantly and depends on which desktop is used. Instructions on how to launch a VM with a current version of the Xfce desktop did actually work, but I wanted an Ubuntu 20.04 GNOME-based desktop GUI on a VM in the cloud. After a couple of hours, I managed to get a working setup, and here is how it is done:
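The piece that tends to differ between desktops is the VNC startup file. As a minimal sketch, a `~/.vnc/xstartup` along these lines can bring up a GNOME session on Ubuntu 20.04, assuming the `ubuntu-desktop` and `tigervnc-standalone-server` packages are installed and the file has been made executable; the exact environment variables may vary on other distributions or desktops.

```sh
#!/bin/sh
# ~/.vnc/xstartup sketch for a GNOME session under TigerVNC on
# Ubuntu 20.04. Assumes ubuntu-desktop and tigervnc-standalone-server
# are installed; run "chmod +x ~/.vnc/xstartup" after creating it.

# Clear session variables inherited from the ssh login, so GNOME
# starts its own session bus instead of reusing a stale one.
unset SESSION_MANAGER
unset DBUS_SESSION_BUS_ADDRESS

# Tell GNOME which desktop flavor to present (Ubuntu's default shell).
export XDG_CURRENT_DESKTOP=ubuntu:GNOME

# Start the desktop; the VNC session ends when gnome-session exits.
exec dbus-launch --exit-with-session gnome-session
```

With this in place, `vncserver` started over ssh launches the desktop, which can then be reached with a VNC client (ideally through an ssh tunnel rather than an open port).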
It happens quite often in my household that I am asked to have a look at an Office document to help with formatting and other things. More often than not, the solution to the problem does not come easily, and I prefer to work on it at my own desk. That means the file has to be copied, which can be done in many different ways, but is usually time-consuming. Since I usually have file-system access to the device, I have long wished I could just send myself a message that contains the path and name of the file in question. So when I had a bit of time recently, I put together an add-on for the Linux Nautilus and Nemo file managers to do just that.
Last year I wrote about a 5G Americas whitepaper that describes how 3GPP has standardized the 5G core in a way that lends itself to cloud-based implementation. In short, that means that control plane functions are split into microservices, deployed in containers on bare-metal clusters and managed with Kubernetes. So far the theory. Now the first telecommunication vendors have announced products designed this way.
I’m almost happy with the basic hands-on understanding of Kubernetes I have gained, which I have written about in parts 1 to 3 of this series. I understand much better now how Kubernetes manages Docker containers, how it abstracts and manages the distribution of containers in a cluster of servers, and how it makes services running in containers reachable from the outside world. From a developer and network administrator point of view, however, one important thing is still missing: how does Kubernetes manage persistent storage for containers? So let’s have a look at this and also experiment with a hands-on example: running a WordPress blog with a MySQL database in a Kubernetes cluster. As you will see, it’s not rocket science.
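To make the idea of persistent storage concrete, here is a minimal sketch of how the MySQL part of such a deployment could be wired up: a PersistentVolumeClaim requests storage that outlives any individual pod, and the pod template mounts it at MySQL’s data directory. Names, sizes and the password handling are illustrative assumptions, and the manifest relies on the cluster providing a default StorageClass.

```yaml
# PersistentVolumeClaim: asks the cluster for 5 GiB of storage that
# survives pod restarts and rescheduling.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
# Deployment fragment: the MySQL container mounts the claim at its
# data directory, so the database files live on the persistent volume.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: changeme   # use a Secret in a real setup
          volumeMounts:
            - name: mysql-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
```

If the MySQL pod is deleted or moved to another node, a replacement pod mounts the same claim and finds the database intact, which is precisely the property a blog’s database needs.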
So here’s the story so far: in parts 1 and 2 of my Kubernetes intro story, we set up a Minikube Kubernetes cluster. We then deployed a container with an app inside, downloaded directly from the Kubernetes image hub, into our cluster. Our cluster is small: it contains only one worker node, and the container we put into a pod was the only service running in it. If you were able to follow that description, you are now ready for part 3 of the story. Based on what we have done so far, we now create our own app, build a Docker image in which the app can run, and then deploy it into our Kubernetes cluster. In the end, we will have two services running in the cluster: the Echoserver app from part 2 and the app we are going to put together in this episode.
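The build step in the middle of that flow could look roughly like the Dockerfile below. It is a generic sketch, not the exact file from the episode; the app file, port and image name are illustrative assumptions.

```dockerfile
# Minimal image for a small, self-contained Python web app.
# Assumes app.py starts an HTTP server listening on port 8080.
FROM python:3.8-slim
WORKDIR /app
COPY app.py .
EXPOSE 8080
CMD ["python", "app.py"]
```

With Minikube, the image can be built straight into the cluster’s Docker daemon (`eval $(minikube docker-env)`, then `docker build -t my-app:1.0 .`) and deployed with `kubectl create deployment my-app --image=my-app:1.0`, so no external image registry is needed for experimenting.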
In part 1, I gave an introduction to my path to learning more about Kubernetes with a hands-on approach. The story ended with Minikube installed and a first sample application (the ‘http echoserver’) up and running in a container managed by Kubernetes. With all of this in place, the next logical step is to have a closer look at the browser-based Kubernetes dashboard and what it shows about this small container deployment.
When we are talking about the 5G Core and implementations based on containers, the story does not end with Docker containers. As a 5GC does not run on just one server and needs lots of redundancy, a management (orchestration) tool is required to manage containers across a large number of servers. There are several tools for this, but it seems that Kubernetes is the tool of choice for most these days. I did a lot of reading about Kubernetes, but no matter how much I read, the whole thing remained too abstract for me. So I decided to get some hands-on experience myself. Here’s how I went about it, in case you’d like to give it a go as well.