In the previous post, I looked at a number of different companies that offer bare metal servers in their data centers. An interesting offer, and also the cheapest one I have found so far, comes from Scaleway: for 33 euros a month, they offer a server based on an Intel Xeon E3-1220 or equivalent, with 32 GB of RAM and 2x 1 TB SSDs, located in one of their Paris data centers. Compared to prices elsewhere, this is very cheap. So where’s the catch?
Continue reading The Old Cloud
Who Rents-Out Bare Metal Servers and How
Following on from the previous post about having a plan C for a bare metal server in the cloud for running my own services, I’ve had a look at a number of different data center operators in Europe and how they offer bare metal servers. I’ve been very happy so far with Hetzner, as they make it very simple to rent a physical server and get an operating system installed. If you already have an account, that bare metal server is only a few clicks away. Entry-level offers with two 500 GB SSDs start at around 50 euros a month, give or take a few euros, currently without an installation fee. So what are others doing?
Continue reading Who Rents-Out Bare Metal Servers and How
The Gigabit At Home Now a Requirement
And here we go, another capacity / demand cycle is coming to a close. The voices are getting weaker, but every now and then, people still ask me why somebody needs a 1 Gbps fiber link at home.
It is a valid question, and so far my personal answer has been that I frequently transfer large amounts of data and feel quite limited by the 100 Mbps VDSL line I have in Cologne, with no fiber in sight. I have definitely outgrown my VDSL line. So perhaps I am a bit of a special case. Well, perhaps. But now the first mainstream games are coming to market that stream most of the data and content they require from servers ‘in the cloud’.
Continue reading The Gigabit At Home Now a Requirement
The Hetzner Plan C
Once upon a time, not so long ago, I decided to duplicate my services running on a bare metal server at home on a bare metal server I rent in a Hetzner data center. This has worked out really well. As the server offers ample capacity, I have additionally migrated quite a number of virtual machines with public IP addresses to it. In other words, there are a number of services now for which I do not have a redundant copy at home. So I needed a ‘Plan C’ in case that server goes south one day. Recently, I became aware of one more reason why that server could suddenly ‘go offline’ that had me raise an eyebrow: Malicious outgoing port scanning activities.
Continue reading The Hetzner Plan C
VoLTE / VoWifi Transfer Smoothness
Perhaps a bit of a strange title but I recently attended an event that had excellent LTE/5G coverage outside buildings, and good Wi-Fi coverage mainly inside buildings. In other words: A good opportunity to have a look at how well an ongoing IMS voice call is switched between LTE (VoLTE) and Wi-Fi (VoWifi) these days.
Continue reading VoLTE / VoWifi Transfer Smoothness
Tcpdump Inside a Container – What Can I See?
Using tcpdump to trace on a Docker virtual bridge interface to see the traffic between all connected containers (see previous post) got me thinking a bit: What could I see if I ran tcpdump inside a container connected to a bridge? Would I only be able to see my own traffic, or would I be able to see traffic between other containers as well?
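One way to try this out would be to install tcpdump in one of the running containers and capture on its eth0 interface. A quick sketch of how that could look, with the container name (web1) purely hypothetical and a Debian/Ubuntu based image assumed:

# hypothetical container "web1", Debian/Ubuntu based image assumed
docker exec -it web1 bash -c "apt-get update && apt-get install -y tcpdump"
docker exec -it web1 tcpdump -i eth0 -n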
Continue reading Tcpdump Inside a Container – What Can I See?
Wireshark and Containers behind Proxies
If your web services run in Docker containers behind reverse proxies, you can of course run a tcpdump / Wireshark trace on the physical Ethernet interface of your server, or on the virtual Ethernet interface of the virtual machine your containers run in. That’s nice, but it only gives you the encrypted https traffic. So if your http server logs are not enough for debugging, it would be really nice to get to the unencrypted http traffic.
There are methods to forward the encryption keys from web browsers and servers to Wireshark, which will then do the decryption for you, but that’s a bit inconvenient. So let’s look for something that is easier to do: If you run your web services in containers behind a reverse proxy, it’s possible to remotely trace the decrypted requests on Docker’s virtual bridge interface to which the web services are connected!
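A minimal sketch of what such a remote trace could look like, assuming the containers are attached to the default docker0 bridge and the server is reachable as server.example.com (both placeholders; user-defined Docker networks show up as br-<id> interfaces instead):

# run tcpdump on the remote Docker bridge and pipe the packets into a local Wireshark
ssh root@server.example.com "tcpdump -i docker0 -U -s0 -w -" | wireshark -k -i -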
Continue reading Wireshark and Containers behind Proxies
Power and Fan Management With Ubuntu on Notebooks
When you look back a decade or so, one of the things that was sometimes a bit of a hassle when installing Linux on a notebook was getting fan control to work correctly. And correctly means that the fan doesn’t run all the time, or spin up to full speed for a few seconds when the temperature rises, only to stop again and repeat this every few seconds. I also noticed in the past that power control, which is a related topic, sometimes did not work well with older Linux kernels: as soon as the temperature started to rise, the CPU clock frequency went to the lowest possible setting and just remained there for a long time. In other words, the notebook would run fine, but was seriously lacking performance. Fortunately, this seems to be a thing of the past, and I didn’t have to tweak anything in that regard on the Lenovo and HP notebooks I installed Ubuntu on in the past two years. That being said, there are some interesting differences in how power and heat management is handled on different notebooks with the same Ubuntu Linux (22.04), and I thought I’d document this here for my and your reference.
Continue reading Power and Fan Management With Ubuntu on Notebooks
A Small Ubuntu in a Docker Container
Recently, I wanted to try out a few things around networking in a Docker container environment. What I wanted to have was a simple container I could open a Bash shell in. Turns out that it’s actually quite easy to do. As I wanted to play around with some options, I decided to use a docker-compose yaml file instead of instantiating the container from the command line. So here’s the docker-compose.yml content:
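A minimal sketch of what this could look like, written from the shell here so the commands to start the container and open a shell in it can be shown as well; the image tag (ubuntu:22.04) and the service name are assumptions:

# write a minimal docker-compose.yml; stdin_open and tty keep bash (the image's
# default command) running so the container does not exit immediately
cat > docker-compose.yml <<'EOF'
services:
  ubuntu-test:
    image: ubuntu:22.04
    container_name: ubuntu-test
    stdin_open: true
    tty: true
EOF
# start the container in the background and attach a bash shell to it
docker compose up -d
docker compose exec ubuntu-test bash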
Continue reading A Small Ubuntu in a Docker Container
Ntfy and Keep-Alives…
To monitor my personal cloud and get instant notifications of events of various sorts on my smartphone, I’ve been using Gotify for many years. As it is based on TCP, keepalive packets have to be sent to keep NAT gateways happy. In the beginning, Gotify did so at a rate of once every 10 seconds. Far too often to be power efficient on cellular networks: the radio channel remained active all the time. Based on my feedback, the keepalive timer was made configurable. With some trial and error, I then established that the NAT gateways between my server and my smartphone can easily cope with TCP keepalives of 7 to 8 minutes. That has been my setting for many years now, and things work very reliably and efficiently.
So far so good. For a recent project I also needed an instant notification solution. Gotify could not do the job for this project, however, because the messages being pushed out need to be delivered to many anonymous recipients that should only have read access to the queue, i.e. they must not be able to send messages themselves. Gotify is a personal messaging server, and all clients require a login and can not only read but also write to queues. So I started looking for something else and came across Ntfy, another great open source messaging solution. It is far more feature rich than Gotify, which is both good and bad. For my project, however, it does offer read-only anonymous queue access, so I’m strongly considering it as an option. It’s easy to set up in a Docker container behind a reverse web proxy, and the documentation is outstanding!
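For the read-only part, ntfy’s HTTP API keeps things simple. A quick sketch with curl, assuming a hypothetical server name and topic, and a server configuration in which anonymous clients are only allowed to read:

# anonymous client: subscribe to a topic as a JSON stream (read access only)
curl -s https://ntfy.example.com/mytopic/json
# publishing requires credentials when anonymous write access is disabled
curl -u alice:password -d "Backup completed" https://ntfy.example.com/mytopic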
So while I was happy at first, I soon noticed that the TCP keepalive time is set to 45 seconds. Due to timeouts and other things happening in cellular networks, this means that the radio connection is pretty much active all the time while the Android app is running; it only goes idle (LTE RRC Release) for a few seconds before the next paging comes in due to the next keepalive packet. Not ideal at all. The screenshot on the left shows what is happening on the LTE air interface.
While I was glad to see an option to configure the keepalive timer (keepalive-interval), the documentation notes that the app will only tolerate keepalive periods of up to 77 seconds.
# Note that the Android app has a hardcoded timeout at 77s,
# so it should be less than that.
#
# keepalive-interval: "45s"
Perhaps better than nothing, but still far away from where I would like it to be, i.e. 7 to 8 minutes. I’m not sure whether a ticket asking the project to make this configurable on the app side would have a chance of success, but perhaps I should open one anyway?