HTTP/3 and QUIC

A while ago, I had a closer look at HTTP/2 (from a 5G core network point of view), and how a client can detect during connection establishment whether it can use this flavor of the protocol or not. The short answer: the client and the server use an extension parameter of the TLS protocol during the authentication and ciphering exchange. In the meantime, the world has moved on, and HTTP/3 has made it out of the starting gate and is already used in practice. Unlike previous versions of the HTTP protocol, which use TCP, HTTP/3 is based on UDP and the new QUIC protocol, which implements TCP-like flow control and a number of other improvements to speed up the simultaneous transfer of the many different files that usually comprise a web page these days. And so I had the same question again: How does the browser detect that it can use HTTP/3, and, as a consequence, QUIC over UDP, for a web page instead of TCP?
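As a teaser: one place to look is the Alt-Svc response header, with which a server can advertise alternative protocols such as HTTP/3. A minimal sketch, using an assumed example header value rather than a live response:

```shell
# A server's Alt-Svc header can be inspected with e.g.:
#   curl -sI https://www.cloudflare.com | grep -i alt-svc
# Below, an assumed example value; the token before '=' names the protocol:
alt_svc='h3=":443"; ma=2592000'

proto=${alt_svc%%=*}   # strip everything after the first '='
echo "$proto"          # prints: h3
```

Here, `h3` tells the client it may retry the same origin over HTTP/3 on UDP port 443, with `ma` giving the advertisement's lifetime in seconds.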

Continue reading HTTP/3 and QUIC

Wireshark: Tracing Encrypted HTTP/2 Traffic

After optimizing Firefox HTTP/1.1 settings for slow wireless connections 17 years ago (!), I pretty much forgot all about it again, because networks became faster and default browser settings and features were adapted for the cellular world. Only recently did I have another look at HTTP, when I noticed that HTTP/2 is now widely used in practice and also plays a big role in 5G core networks. From a security point of view, the great thing about HTTP/2 is that browsers only support TLS-encrypted HTTPS connections. This has the downside, of course, that tracing and analyzing HTTP/2 connections with Wireshark is not possible out of the box any more. But there’s a fix for this!
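A minimal sketch of the usual approach: point the `SSLKEYLOGFILE` environment variable at a file before starting the browser, and hand that file to Wireshark (Preferences → Protocols → TLS → (Pre)-Master-Secret log filename) so it can decrypt the captured TLS sessions. The file path is an arbitrary choice:

```shell
# Export the key log location, then start the browser from the same shell
# so it inherits the variable (Firefox and Chrome both honor it):
export SSLKEYLOGFILE="$HOME/tls-keys.log"
# firefox &    # uncomment to launch; Wireshark then decrypts using the log
echo "$SSLKEYLOGFILE"
```

The browser appends per-session secrets to that file as it opens connections, so the capture can be decrypted even after the fact.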

Continue reading Wireshark: Tracing Encrypted HTTP/2 Traffic

The 2 Hour Kernel Compile

Recently, Ubuntu 20.04 LTS bumped the Linux kernel version they use from 5.11 to 5.13. While that is generally welcome to support newer hardware, it unfortunately also broke the suspend/resume functionality of my notebook with an AMD Ryzen 7 4750U CPU. Bummer!

This seems to be a known problem in the 5.13 kernel line, and it was subsequently fixed somewhere down the road. It’s definitely OK again in the current Linux 5.15 long-term support kernel. More good news: Ubuntu 20.04 is likely to switch to 5.15 a few months from now, and I could of course just stay with the last 5.11 kernel until then. However, I’m not really happy with that, because security issues are no longer fixed in 5.11, which potentially exposes me to security vulnerabilities in the next couple of months.

Long story short: To be on the safe side, I started looking for ways to use a 5.15 kernel with Ubuntu 20.04 until Canonical moves to that kernel line on their own. Fortunately, there are options:
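One of those options, compiling a mainline kernel from source, can be sketched as follows; the package list, kernel version, and build target are assumptions for a stock Ubuntu 20.04 system, not a tested recipe:

```shell
# Build dependencies (assumed package set for Ubuntu 20.04):
sudo apt install build-essential libncurses-dev flex bison libssl-dev libelf-dev

# Fetch and unpack a 5.15 source tree from kernel.org:
wget https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.15.tar.xz
tar xf linux-5.15.tar.xz && cd linux-5.15

# Start from the running kernel's configuration, accept defaults for new options:
cp "/boot/config-$(uname -r)" .config
make olddefconfig

# Build installable .deb packages (this is where the hours go), then install:
make -j"$(nproc)" bindeb-pkg
sudo dpkg -i ../linux-image-*.deb ../linux-headers-*.deb
```

Building Debian packages rather than running `make install` directly keeps the custom kernel manageable through the package manager, so it can be removed cleanly once Ubuntu ships 5.15 itself.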

Continue reading The 2 Hour Kernel Compile

Data-transfer rate of a 4k Youtube Video

Depending on the device you are watching a Youtube video on, the service will send you a version of the video that fits your device’s screen resolution and also takes potential network limits into account. So if a video is available in 4k resolution, if you have a 4k device, and if the pipe between Youtube and the device is not the limit, what will the data-transfer rate be?
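To put a rough number on it before diving in: assuming a 4k stream encoded at around 20 Mbit/s (the bitrate here is an assumption for the arithmetic, not a measured value), the data volume per hour works out like this:

```shell
mbit_per_s=20                                    # assumed 4k stream bitrate
gb_per_hour=$((mbit_per_s * 3600 / 8 / 1000))    # Mbit/s -> GB per hour
echo "${gb_per_hour} GB per hour"                # prints: 9 GB per hour
```

In other words, every 2–3 Mbit/s of video bitrate adds roughly another gigabyte per hour of watching.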

Continue reading Data-transfer rate of a 4k Youtube Video

Kubernetes Intro – Part 11 – Helm Charts Revisited

In part 8 of this series on Kubernetes, I used ‘Helm’ and ‘Helm Charts’ as an easy way to deploy a complicated Kubernetes Ingress configuration with a few commands. At the time, I left it at that and decided to come back later and explore ‘Helm’ in more detail. So here we go: In this post, I’ll have a look at what Helm is and show how to create a Helm chart for deploying several WordPress blogs, each with its own MySQL database, into a managed cluster.
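The basic workflow can be sketched with a handful of commands; the chart name, release names, and per-blog values files below are hypothetical placeholders:

```shell
# Scaffold a new chart, then install it once per blog, each release
# getting its own values file (all names are placeholders):
helm create wordpress-blog
helm install blog-alice ./wordpress-blog -f values-alice.yaml
helm install blog-bob   ./wordpress-blog -f values-bob.yaml
helm list    # shows one release per blog
```

The point of the chart is exactly this: the same templates are stamped out several times, and only the values files differ between the blogs.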

Continue reading Kubernetes Intro – Part 11 – Helm Charts Revisited

Daisy Chaining Wi-Fi Repeaters

A few meters can make the difference between 80 Mbit/s and zero reception. Recently, I was asked to see if I could improve the in-house LTE coverage of a small home. On the ground floor, even close to the windows, reception was pretty much zero. On the third floor, however, where one has a nice view over the town, I could easily get 80 Mbit/s from the external LTE network. So the solution was clear: put an LTE / Wi-Fi router there instead of using LTE directly on mobile devices on the ground floor. The problem: Even though the house is old and probably not much steel was used to build it, the Wi-Fi signal wouldn’t make it through two floors very well.

Continue reading Daisy Chaining Wi-Fi Repeaters

Chasing Bit Rot with MD5 Hashes

When I recently copied a couple of hundred gigabytes from one backup drive to another, things did not go quite according to plan: After a few hours, the Linux kernel reported read issues on the source drive and remounted it in read-only mode. Rsync borked and aborted. I eventually fixed this by using a different cable and connecting the drive to a USB port directly on my notebook. But this shouldn’t have happened in the first place, so I was wondering how the backup on that drive was really doing. I decided to have a closer look by verifying the integrity of the data on the drive. The results were interesting.
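The basic idea can be sketched in two passes: record an MD5 checksum for every file once, then re-verify later, so any flipped bit shows up as a failed checksum. A throwaway directory stands in for the backup mount here:

```shell
# Stand-in for the backup tree (use the real mount point in practice):
dir=$(mktemp -d)
echo "some payload" > "$dir/file1"

# Pass 1: record a checksum for every file; write the list outside the
# tree so it does not end up checksumming itself
(cd "$dir" && find . -type f -exec md5sum {} +) > "$dir.md5"

# Pass 2 (months later): re-verify; md5sum prints a FAILED line and exits
# non-zero for every file whose content changed in the meantime
(cd "$dir" && md5sum -c --quiet "$dir.md5") && echo "all files OK"
```

With `--quiet`, a clean run stays silent, so on a tree with hundreds of thousands of files only the damaged ones make it into the output.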

Continue reading Chasing Bit Rot with MD5 Hashes

Kubernetes Intro – Part 10 – Persistent Storage in a Managed Cluster

Wow, this is part 10 in my series on how to get started with Kubernetes! I am obviously having a lot of fun with the topic, and it’s really nice to be able to experiment with the technology, as it is not only the basis for 5G core networks, but also massively transforms all parts of telecommunication networks in general. Today’s topic: How to store data persistently in a managed Kubernetes cluster (with a practical example, of course).
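In a nutshell, a pod asks for storage through a PersistentVolumeClaim, and the managed cluster’s default StorageClass provisions a volume behind it. A hypothetical minimal claim, with the name and size chosen for illustration:

```shell
# Create a 1 GiB claim against the provider's default StorageClass:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF

# The claim should move from Pending to Bound once a volume is provisioned:
kubectl get pvc demo-data
```

A pod then mounts the claim by name, and the data survives pod restarts and rescheduling, which is the whole point of the exercise.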

Continue reading Kubernetes Intro – Part 10 – Persistent Storage in a Managed Cluster

Kubernetes Intro – Part 9 – Deploying Your Own App

And I’m moving along with my exploration of how to use a managed Kubernetes cluster. In the previous episode, I went into the details of how to deploy applications into a cluster and hook them up to an Ingress load balancer, so they are reachable from the outside. In this episode, I want to expand on Part 3, in which I explored how to develop a simple node.js based app locally and push it into a local Minikube. The challenge: How can I push my locally developed app into a remote managed Kubernetes cluster?
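The general shape of the answer is a container registry in between: build the image locally, push it to a registry the cluster can pull from, and reference it in a Deployment. The registry host, image name, and ports below are all placeholders:

```shell
# Build and tag the local node.js app for a registry the cluster can reach:
docker build -t registry.example.com/demo/node-app:v1 .
docker push registry.example.com/demo/node-app:v1

# Run it in the remote cluster and expose the app's port inside the cluster:
kubectl create deployment node-app --image=registry.example.com/demo/node-app:v1
kubectl expose deployment node-app --port=80 --target-port=3000
```

The difference to the Minikube setup from Part 3 is only the middle step: instead of building straight into the local node’s image cache, the image has to travel through a registry.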

Continue reading Kubernetes Intro – Part 9 – Deploying Your Own App

Decommissioning My 2009 Notebook And A Look Inside

A bit of a history and hardware post today: 12 years ago, back in November 2009, I bought what would turn out to be the last notebook I mainly used with Windows, a Toshiba Satellite L550. At €555, it was a mid-priced notebook, and surprisingly, it lasted me 12 years of almost daily use.

The notebook originally shipped with Windows Vista, and I upgraded it to Windows 7 at some point in 2010. Some time later, I installed Ubuntu alongside it, and Windows was pushed to the sidelines. After a few years, I moved on to another notebook for my daily work, but due to its 17″ screen, the L550 remained in use as a video streaming device. For many years, the hardware was capable enough to stream most videos in the browser or to watch local videos with VLC. So the only upgrade I made to the notebook in 12 years was switching from a 256 GB disk drive to a 256 GB SSD around 4 or 5 years ago.

But even with the SSD, the point had come after 12 years at which program startup delays, compared to other devices in the household, became very noticeable, and not all videos, especially in the browser, would run smoothly anymore. So I replaced the device with a new HP 17″ notebook with an 11th-generation Core i5 processor, i.e. a notebook with 11 hardware generations of enhancements. The price of the new notebook was almost exactly the same, which means that prices in the mid-range segment have not significantly fallen over the decade. But obviously, many improvements have happened over time, so it’s interesting to compare the hardware of then and now to see what has changed.

Continue reading Decommissioning My 2009 Notebook And A Look Inside