A bit of a history and hardware post today: 12 years ago, back in November 2009, I bought the last notebook I mainly used with Windows for a few years, a Toshiba Satellite L550. At €555, it was a mid-priced notebook, and surprisingly, it has lasted me 12 years of almost daily use.
The notebook originally shipped with Windows Vista, which I upgraded to Windows 7 at some point in 2010. Some time later, I installed Ubuntu alongside it, and Windows was pushed to the sidelines. After a few years, I moved on to another notebook for my daily work, but due to its 17″ screen, the L550 remained in use as a video streaming device. For many years, the hardware was capable enough to stream most videos in the browser or watch local videos with VLC. So the only upgrade I made to that notebook in 12 years was switching from the original 256 GB hard disk to a 256 GB SSD around 4 or 5 years ago.
But even with the SSD, after 12 years the point had come at which program startup delays compared to other devices in the household became very noticeable, and not all videos, especially in the browser, would run smoothly anymore. So I replaced the device with a new HP 17″ notebook with an 11th generation Core i5 processor, i.e. a notebook with 11 hardware generations of enhancements. The price of the new notebook was almost exactly the same, which means that prices in the mid-range segment have not fallen significantly over the decade. But obviously, many improvements have happened over time, so it’s interesting to compare the hardware of then and now to see what has changed.
In part 5 of this series, I looked at how companies like Linode, DigitalOcean and Amazon offer managed Kubernetes cluster instances. In other words, they provide Kubernetes clusters in freely configurable sizes, which can then be used for projects. The approach is quite different from how I thought it would work: effectively, you get your own Kubernetes cluster(s) that work and feel the same way as any private Kubernetes installation, be it a Minikube, which I explored at the beginning of this series, or a huge private cluster installation. So how hard can it be to run the same exercises on such a managed remote cluster as on the Minikube?
Once upon a time, when I started learning about Kubernetes, I thought it was a one-stop shop to manage containers that are distributed over many servers. But the more I learn about the topic, the more I realize that while Kubernetes offers a lot of functionality, there are many things it doesn’t do out of the box, and for which 3rd-party products are used. In recent weeks, I’ve come across the terms Istio and “Service Mesh” a lot, so it was time to have a closer look at what this actually is and which problems a service mesh solves in a Kubernetes cluster.
Yes, this is already part 6 of my ongoing Kubernetes intro series. In part 5, I moved ever deeper into the cloud by looking at how to create managed Kubernetes clusters in Amazon’s and Linode’s clouds. Containers and Kubernetes are all about scale, so one might wake up one day with many Kubernetes clusters to manage. And as you might have guessed, that must be automated as well to further scale the infrastructure. There are quite a few tools available for managing all the Kubernetes clusters of an organization, and today I will have a look at three of them: Cluster API, Flux CD and Argo CD.
It’s amazing how, in the past year or so, efforts in the telecom industry to move next-generation systems into containers and manage them with Kubernetes have moved from theory to practice. The 5G core, for example, was specified by 3GPP in a cloud-native way from the start, and even things like the Open Radio Access Network (Open-RAN), whose specification effort started a bit earlier and hence is still based on virtual machine technology, are quickly moving to container-based solutions in the real world. This was one of the reasons why, about a year ago, I had another look at Docker and Kubernetes, which resulted in my Docker and Kubernetes introduction posts on this blog. Also, I have dockerized a number of services I host for myself (e.g. this blog!) and use containers in my own software development and deployment process. This has made it much easier to spawn independent instances of my document research database for various friends in minutes instead of hours. But as far as Kubernetes is concerned, I don’t really have a practical use case myself, so I did not go beyond a Minikube installation. So one thing that has always remained a bit opaque to me is how Amazon and other hyperscalers make Kubernetes clusters available from their data centers.
I ended the previous post on the topic by hinting that I saw a significant performance drop on the new Lenovo L14 Gen 2 notebook during ffmpeg encoding after around 60 seconds. Instead of reaching a transcoding time of around 7 minutes for my sample video and a speed-up in the order of 6x, the speed-up indicator suddenly started going backwards and settled at 3.5x, which flatly doubled the transcoding time. Even 10-year-old notebooks are faster! So I had a closer look at what was going on and how it could be fixed.
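One quick way to spot this kind of behavior is to watch the per-core clock rates while the encode runs, for example via the generic cpufreq sysfs interface. A minimal sketch (not the exact commands from the post; whether these files exist depends on the kernel and cpufreq driver):

```shell
# Print the current clock of each core in kHz; run this repeatedly
# (e.g. under watch) while ffmpeg is encoding to see whether the
# clocks collapse after the first minute.
found=0
for f in /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq; do
    if [ -r "$f" ]; then
        echo "$f: $(cat "$f") kHz"
        found=1
    fi
done
# Fall back gracefully on systems without the cpufreq interface
[ "$found" -eq 1 ] || echo "cpufreq sysfs interface not available"
```

If the reported frequencies drop sharply at the same moment the ffmpeg speed indicator does, the slowdown is a throttling effect rather than a software problem.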
And again, there was a choice to make for another Linux notebook. This time, the main requirement was to get the latest CPU architecture, at the risk of some rough edges that might only be fixed later on. As Lenovo usually has good Linux support, my choice fell on a Lenovo L14 Gen 2 with an 11th generation Intel i5-1135G7 CPU with 4 cores and 8 threads. And I guess I got what I wanted: some excitement and adventure along the way.
It’s that time of the year again to look back at the things that moved me this year. On the surface, it seems that I had relatively few posts about wireless network technologies. But this appearance is quite deceptive.
A quick post today on a new personal backup speed record: I have a lot of data on my notebook and thus use a 2 TB SSD, which is currently about 50% full. Also, I like to have an emergency spare drive at hand so I can quickly get operational again should anything happen to my notebook or the drive. That means that I keep a 1:1 copy of my installation at hand, which I keep up to date by rsync’ing deltas at regular intervals. This process is usually quite fast, as the deltas are relatively small. Creating a new spare SSD, however, requires copying 1 TB of existing data to the new drive, which previously took many hours. But I’ve refined my hardware and technique over time, and this time around, I got 1 TB of data onto a new spare SSD in around 35 minutes. The sustained transfer speed was 490 MB/s, or 28 GB per minute, between the two LUKS-encrypted partitions!
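The delta approach boils down to one rsync invocation per refresh. A minimal sketch with throwaway directories (in reality the source and target would be the two mounted LUKS partitions; the paths and file here are purely illustrative):

```shell
# Illustrative stand-ins for the live system and the spare SSD
SRC=$(mktemp -d)
DST=$(mktemp -d)

echo "version 1" > "$SRC/notes.txt"
# First run: full copy; -a preserves permissions, ownership, timestamps
rsync -a "$SRC/" "$DST/"

echo "version 2" > "$SRC/notes.txt"
# Later runs: only the delta moves; --delete removes files that no
# longer exist on the source, keeping the spare a true 1:1 copy
rsync -a --delete "$SRC/" "$DST/"

cat "$DST/notes.txt"
```

The trailing slashes on the paths matter to rsync: they copy the *contents* of the source directory rather than the directory itself.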
In the past, I sometimes noticed that after updating and rebooting my cloud server, which runs around 12 virtual machines, memory use would decrease a few hours after the reboot. The reason for this is that the kernel looks for duplicate memory pages and combines them. And when running 12 virtual machines, most with the same operating system and applications, a lot of optimization is possible. But that’s about all I knew about it so far. Recently, however, I stumbled across the kernel feature that performs this optimization and reports interesting details to userspace upon request: Kernel Samepage Merging (KSM).
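Those details live in a handful of counters under sysfs. A quick sketch of how to query them (the directory only exists on kernels built with KSM support, hence the check):

```shell
KSM=/sys/kernel/mm/ksm
if [ -d "$KSM" ]; then
    echo "run:           $(cat "$KSM/run")"            # 1 = merging active
    echo "pages_shared:  $(cat "$KSM/pages_shared")"   # shared pages in use
    echo "pages_sharing: $(cat "$KSM/pages_sharing")"  # sites sharing them,
                                                       # i.e. the saving
else
    echo "KSM not available on this kernel"
fi
```

A high pages_sharing to pages_shared ratio indicates effective sharing, which is exactly what one would expect with a dozen near-identical virtual machines.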