Wireshark ‘Conversations’ analysis of a speedtest.net speed test
In part 8 of my Starlink series, I’ve been looking at downlink throughput with a Windows notebook. The maximum speed I could achieve with iperf3 was around 30 Mbps, while tools like speedtest.net and LibreSpeed easily reached 150 to 200 Mbps. So what’s the difference, and which speed test reflects real-world use?
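As a side note, the same ‘Conversations’ overview that Wireshark offers in its GUI can also be produced on the command line with tshark. Here’s a minimal sketch, with the capture file name just a placeholder:

    # print a summary of all TCP conversations in the capture,
    # with packet and byte counts for each direction
    tshark -r speedtest.pcapng -q -z conv,tcp

The per-direction byte counts make it easy to spot which connections actually carried the bulk of the data during a speed test.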
In my previous posts on Starlink, I’ve used Linux as the operating system on my notebook and on the network-side server for my iperf3 throughput measurements. Due to packet loss on the link, the choice of TCP congestion avoidance algorithm makes a significant difference. When the standard Cubic algorithm is configured on both sides, a data transfer over a single TCP connection is 5 times slower than with BBR. So what happens when I use a Windows client over Starlink?
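For reference, here’s a minimal sketch of how BBR can be activated on a recent Linux kernel for such tests (the kernel needs the tcp_bbr module):

    # show which congestion control algorithms the kernel offers
    sysctl net.ipv4.tcp_available_congestion_control
    # load the BBR module if it is not built into the kernel
    sudo modprobe tcp_bbr
    # make BBR the default congestion control (not persistent across reboots)
    sudo sysctl -w net.ipv4.tcp_congestion_control=bbr

To make the change permanent, the same setting goes into /etc/sysctl.conf or a file below /etc/sysctl.d/.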
After seeing the many transmission errors over the Starlink air interface that TCP can handle quite well, particularly on Linux with the BBR congestion avoidance algorithm, I was of course wondering if this has an audible impact on real-time voice services when used over Starlink. So here’s how that went:
Quick thought of the day: Yes, there is a reason why flight simulation software does not run so well on notebooks: they are optimized for power efficiency, not performance. That’s quite OK for everyday tasks such as word processing, which typically requires less than 8 watts on my notebook, or for running specialized software for mobile network analysis in virtual machines, which typically requires between 15 and 20 watts. But when simulating a virtual world, the power envelope of around 25-30 watts of small notebooks is nowhere near enough. Just how much more power a workstation draws when running flight simulation software took me quite by surprise once I thought about it a bit.
Starlink iperf3 uplink throughput with the TCP BBR congestion avoidance algorithm.
So here we go: after the interesting but somewhat slow Starlink uplink throughput results discussed in part 5, let’s have a look at whether the TCP BBR congestion avoidance algorithm can improve the situation. BBR is not configured out of the box, but I use it on all of my servers and workstations. BBR could be particularly helpful for uplink transmissions, as it is a sender-side measure. Thus, it doesn’t matter which TCP algorithm is used on the receiver side, which, for this test, was a box in a data center.
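If you want to try this yourself: on Linux, iperf3 can select the congestion control algorithm per test with the -C option, so the system-wide default does not even have to be changed. A minimal sketch, the server name being a placeholder:

    # 30 second uplink test: the client is the sender, so forcing
    # BBR here is all that is needed for the uplink direction
    iperf3 -c iperf.example.net -t 30 -C bbr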
In parts 3 and 4 of this Starlink discovery series, I’ve been taking a look at the downlink throughput performance. While the standard Cubic TCP congestion avoidance algorithm used by Linux produced only meager results, switching to BBR produced a five-fold throughput increase. So how about throughput in the uplink direction?
Data throughput during a 30-second TCP iperf3 session with TCP Cubic for congestion avoidance.
In part 3 of this series, I’ve taken a look at Starlink’s downlink performance with the non-standard TCP BBR congestion avoidance algorithm. Overall, I was quite happy with the result: despite the variable channel and quite a bit of packet loss, BBR kept overall throughput high. Cubic, the standard TCP congestion avoidance algorithm, is not nearly as tolerant of packet loss, so I was eager to see how the system behaves in the default Linux TCP configuration. And it’s not pretty, I’m afraid.
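For downlink measurements, iperf3’s reverse mode makes the server the sender, so the congestion avoidance algorithm active on the server side is the one that counts. A minimal sketch, again with a placeholder server name:

    # check which algorithm the local kernel currently uses by default
    sysctl net.ipv4.tcp_congestion_control
    # 30 second downlink test: -R reverses the direction, the server sends
    iperf3 -c iperf.example.net -t 30 -R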
After the more high-level parts 1 and 2 on Starlink, it’s now time to have a closer look at how the Starlink downlink channel behaves. I’m totally amazed by the system, and it performs very well in Germany. That being said, it probably comes as no surprise that on the IP layer, the graphs produced by a data transfer over satellites look very different from the same graphs produced by a data transfer over a fixed-line VDSL link. For the comparison, I’ve used Starlink over the router’s built-in Wi-Fi and compared it to my VDSL line at home, which is also connected to a 5 GHz Wi-Fi router.
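For those who want to produce such graphs themselves: the underlying packet traces can be recorded with tshark and then opened in Wireshark’s I/O graph. A minimal sketch, with the interface name being system specific:

    # capture 60 seconds of traffic on the Wi-Fi interface for
    # later analysis in Wireshark (adapt wlan0 to your system)
    tshark -i wlan0 -a duration:60 -w starlink-downlink.pcapng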
Summer seems to have taken a break in the second half of July, and the day I wanted to test Starlink at another place and demonstrate it to some people was as inhospitable as it could probably get during a summer day in Germany: the temperature was well below 20 degrees Celsius, it was very windy, and we had everything from very cloudy but dry weather to periods with light to strong rain showers throughout the day. Or, to look at it from the positive side: perfect for testing Starlink in less than ideal conditions. So here’s how that went.
The Starlink antenna, affectionately called ‘Dishy’ by some, on the rooftop
I’m sure you’ve seen one or the other satellite-focused post on this blog over the last year. So far, I’ve mostly been looking into handheld satellite text messaging, but I’ve always kept a close eye on Starlink as well. Recently, Starlink has lowered its prices and introduced mobile tariffs that can be activated and deactivated on a monthly basis. Mobility is important for me, as I wanted to test and potentially use satellite-based Internet access in a number of different places. Also, I don’t need it all year round, so being able to pay only for the months it is in use is just what I was waiting for. In other words, the time had come for me to order a terminal and see how it really performs in the Cologne area in Germany.