Part 8 of this series took a closer look at the downlink performance of two Windows 11 based machines over Starlink. This episode now turns to the uplink.
Before I go on, let’s set expectations: As shown in parts 5 and 6, there is a significant difference in uplink performance on Linux machines depending on which TCP congestion avoidance algorithm is used. With the default TCP Cubic algorithm, throughput was around 5 Mbps, whereas TCP BBR pushed uplink throughput to 30 Mbps.
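As an aside, for anyone who wants to check or reproduce this on a Linux machine: the kernel exposes the active and the available congestion avoidance algorithms in /proc. Here is a minimal Python sketch (Linux only, standard kernel interfaces, nothing specific to my setup):

```python
# Minimal sketch (Linux only): show which TCP congestion avoidance
# algorithm is active and which ones the kernel has loaded.
from pathlib import Path

PROC = Path("/proc/sys/net/ipv4")

current = (PROC / "tcp_congestion_control").read_text().strip()
available = (PROC / "tcp_available_congestion_control").read_text().split()

print(f"active algorithm: {current}")
print(f"loaded algorithms: {', '.join(available)}")

# BBR only shows up in the list if the tcp_bbr module is loaded
# ('modprobe tcp_bbr'); switching system-wide requires root:
#   sysctl -w net.ipv4.tcp_congestion_control=bbr
```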
So here’s what uplink throughput looks like on Windows with the default TCP parameters:
In part 8 of my Starlink series, I’ve been looking at downlink throughput with a Windows notebook. The maximum speed I could achieve with iperf3 was around 30 Mbps, while tools like speedtest.net and LibreSpeed could easily reach 150 to 200 Mbps. So where does the difference come from, and which speed test reflects real-world use?
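Part of the answer is the number of parallel TCP connections: browser-based speed tests typically open several streams at once, while iperf3 uses a single stream by default, and on a lossy link each stream recovers from packet loss independently, so the aggregate of many streams ends up much higher. The following Python sketch illustrates the idea; the URL is a hypothetical placeholder, to be replaced by a large test file on a server you control:

```python
# Sketch: aggregate throughput of several parallel TCP downloads, the
# way browser-based speed tests work. Each connection backs off and
# recovers from packet loss independently, so N streams roughly add up.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TEST_URL = "http://speedtest.example.com/100MB.bin"  # hypothetical placeholder
STREAMS = 8

def download(url: str) -> int:
    """Fetch the URL and return the number of bytes received."""
    total = 0
    with urllib.request.urlopen(url) as resp:
        while chunk := resp.read(64 * 1024):
            total += len(chunk)
    return total

start = time.monotonic()
with ThreadPoolExecutor(max_workers=STREAMS) as pool:
    received = sum(pool.map(download, [TEST_URL] * STREAMS))
elapsed = time.monotonic() - start

print(f"{STREAMS} parallel streams: {received * 8 / elapsed / 1e6:.1f} Mbps aggregate")
```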
In my previous posts on Starlink, I’ve used Linux as the operating system on my notebook and on the network-side server for my iperf3 throughput measurements. Due to packet loss on the link, the choice of TCP congestion avoidance algorithm makes a significant difference: when the standard Cubic algorithm is configured on both sides, a data transfer over a single TCP connection is 5 times slower than with BBR. So what happens when I use a Windows client over Starlink?
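As an aside, what iperf3 measures in its default mode is essentially a bulk transfer over a single TCP connection. A stripped-down Python equivalent of the uplink test looks like this; HOST and PORT are placeholders for a sink you run yourself (for example, netcat listening on the server and discarding the data), not a real service:

```python
# Minimal single-connection uplink sketch of what iperf3 does by
# default: push data over one TCP socket for a fixed time and
# compute the achieved rate in Mbps.
import socket
import time

HOST, PORT = "192.0.2.1", 5201   # placeholder endpoint
DURATION = 10                    # seconds
CHUNK = b"\x00" * (128 * 1024)

sent = 0
with socket.create_connection((HOST, PORT)) as sock:
    start = time.monotonic()
    deadline = start + DURATION
    while time.monotonic() < deadline:
        sock.sendall(CHUNK)      # blocks when the send buffer is full
        sent += len(CHUNK)
    elapsed = time.monotonic() - start

# Note: kernel send buffering skews very short tests upwards.
print(f"uplink: {sent * 8 / elapsed / 1e6:.1f} Mbps over one connection")
```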
Quick thought of the day: Yes, there is a reason why flight simulation software does not run so well on notebooks: they are optimized for power efficiency rather than performance. That’s quite OK for everyday tasks such as word processing, which typically requires less than 8 watts on my notebook, or running specialized software for mobile network analysis in virtual machines, which typically requires between 15 and 20 watts. But when simulating a virtual world, the power budget of around 25-30 watts that small notebooks can sustain is nowhere near enough. Just how much more power a workstation draws when running flight simulation software took me quite by surprise once I thought about it a bit.
So here we go: After the interesting but somewhat slow Starlink uplink throughput results discussed in part 5, let’s have a look at whether the TCP BBR congestion avoidance algorithm can improve the situation. BBR is not enabled out of the box, but I use it on all of my servers and workstations. BBR could be particularly helpful for uplink transmissions, as it is a sender-side measure. Thus, it doesn’t matter which TCP algorithm is used on the receiver side, which, for this test, was a box in a data center.
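Since the algorithm only matters on the sending side, Linux even lets an application pick it per socket via the TCP_CONGESTION socket option, without touching the receiver at all. Here is a minimal Python sketch of that; the endpoint is a placeholder, and the tcp_bbr kernel module must be loaded for "bbr" to be accepted. (On Linux, iperf3 exposes the same knob via its -C/--congestion option.)

```python
# Sketch: select the congestion avoidance algorithm per socket on the
# sender. TCP_CONGESTION is Linux-specific (exposed in Python 3.6+).
import socket

HOST, PORT = "192.0.2.1", 5201   # placeholder endpoint

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# The option value is the algorithm name as a byte string; this raises
# OSError if the tcp_bbr module is not loaded.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bbr")
sock.connect((HOST, PORT))

# Read the option back to confirm; the kernel may pad with NUL bytes.
algo = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
print("this connection uses:", algo.rstrip(b"\x00").decode())
sock.close()
```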
In parts 3 and 4 of this Starlink discovery series, I’ve been taking a look at the downlink throughput performance. While the standard Cubic TCP congestion avoidance algorithm used by Linux produced only meager results, switching to BBR produced a five-fold throughput increase. So how about throughput in the uplink direction?
In part 3 of this series, I’ve taken a look at Starlink’s downlink performance with the non-standard TCP BBR congestion avoidance algorithm. Overall, I was quite happy with the result: despite the variable channel and a fair amount of packet loss, BBR kept overall throughput quite high. Cubic, the standard TCP congestion avoidance algorithm, is not nearly as tolerant of packet loss, so I was anxious to see how the system behaves in the default Linux TCP configuration. And it’s not pretty, I’m afraid.
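To get a feel for why packet loss hurts a loss-based algorithm like Cubic so much, the classic Mathis et al. approximation gives an upper bound on TCP throughput as a function of round-trip time and loss probability. It was derived for Reno-style TCP, and Cubic behaves somewhat differently, but the qualitative point stands. The RTT and loss numbers below are illustrative assumptions on my part, not measurements from this post, yet they land in the same ballpark as the roughly 5 Mbps seen with Cubic elsewhere in this series:

```python
# Back-of-the-envelope Mathis approximation for loss-based TCP:
#   throughput <= (MSS / RTT) * (C / sqrt(p)), with C ~ 1.22
# RTT and p below are assumed, illustrative values.
from math import sqrt

MSS = 1460    # bytes per TCP segment
RTT = 0.040   # seconds, a plausible Starlink round-trip time
p = 0.01      # assumed packet loss probability
C = 1.22

bps = (MSS * 8 / RTT) * (C / sqrt(p))
print(f"loss-based TCP ceiling: {bps / 1e6:.1f} Mbps")  # ~3.6 Mbps
```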
After the more high-level parts 1 and 2 on Starlink, it’s now time to have a closer look at how the Starlink downlink channel behaves. I’m totally amazed by the system, and it performs very well in Germany. That being said, it probably comes as no surprise that on the IP layer, the graphs produced by a data transmission over satellites look very different from the same graphs produced by a data transfer over a fixed-line VDSL link. For the comparison, I’ve used Starlink over the router’s built-in Wi-Fi and compared it to my VDSL line at home, which is also connected to a 5 GHz Wi-Fi router.
Summer seems to have taken a break in the second half of July, and the day I wanted to test Starlink at another place and demonstrate it to some people was as inhospitable as it could probably get on a summer day in Germany: the temperature was well below 20 degrees Celsius, it was very windy, and we had everything from very cloudy but dry weather to periods with light to strong rain showers throughout the day. Or, to look at it from the positive side: perfect for testing Starlink in less than ideal conditions. So here’s how that went.