2.5GbE Performance Revisited – Workstation Power At Home – Part 9

After the mixed, both good and bad, performance of my Lenovo X250 with Ubuntu 20.04 over a 2.5GbE link to my workstation, I couldn't just leave the topic alone and had to investigate further how I could potentially improve throughput when using Ubuntu's Nautilus file manager to transfer files to and from the workstation. And again, some surprises were waiting for me.

To recap: While scp with weak crypto from the command line gives me about 207 MB/s, Nautilus + sftp leaves me at around 125 MB/s. In both cases I read and write large 8-10 GB files to a LUKS-encrypted SSD. So how does LUKS encryption impact the result? When writing to an unencrypted SSD partition, I could increase the data transfer rate to 150 MB/s for Nautilus and sftp, i.e. 25 MB/s more than with SSD encryption in software.
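For reference, a minimal sketch of what such a command-line test could look like is shown below. The host name, target path, file name and cipher are placeholders and not necessarily the exact parameters from my earlier tests:

    # Copy a large test file with a fast cipher to reduce SSH crypto overhead.
    # Host, path and cipher are examples only.
    scp -c aes128-ctr bigfile.img user@workstation:/mnt/ssd/

    # The same transfer via sftp for comparison; Nautilus speaks sftp through GVfs.
    sftp user@workstation <<'EOF'
    put bigfile.img /mnt/ssd/bigfile.img
    EOF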

So, let's give it another try with a somewhat more modern CPU. While my X250 notebook has a 5th-generation Intel CPU, the other notebook I used for this test has a 7th-generation CPU. In this setup, again with Ubuntu 20.04 but without SSD encryption, I could get up to 200 MB/s. A nice result, but it requires a faster CPU and forgoing LUKS encryption for the SSD. While a faster CPU is in the realm of possibility in the future, not using encryption for the SSD is certainly not an option.
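To get a rough idea of how much the LUKS layer can cost on a given CPU, the kernel's crypto throughput can be measured without any disk I/O at all. This is a generic check, not the exact procedure I used:

    # In-memory benchmark of the ciphers LUKS can use (no disk involved).
    cryptsetup benchmark

    # Check whether the CPU exposes the AES-NI instruction set,
    # which matters a lot for LUKS throughput on older CPUs.
    grep -m1 -o aes /proc/cpuinfo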

And now to a result I didn't like at all from an open-source point of view: When using Windows 10 on that 7th-generation Intel CPU notebook, I could reach file transfer speeds of 230 MB/s in both directions to and from the workstation over Samba. That's still a bit short of the iperf3 throughput of almost 300 MB/s from the previous post, and this setup also ran without software SSD encryption, but it beats my Nautilus + sftp setup on the same 7th-generation CPU notebook. Hm…
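For completeness, the raw TCP throughput figure quoted above came from an iperf3 run in the previous post; a test along those lines looks roughly as follows, with the host name being a placeholder:

    # On the workstation: start the iperf3 server.
    iperf3 -s

    # On the notebook: run a 30-second test in both directions.
    iperf3 -c workstation -t 30        # notebook -> workstation
    iperf3 -c workstation -t 30 -R     # workstation -> notebook (reverse mode)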

2 thoughts on “2.5GbE Performance Revisited – Workstation Power At Home – Part 9”

  1. Try NFS. Samba is (as I recall) essentially single-threaded, while NFS is more multi-threaded (at least the kernel implementation). The downside of NFS is that all writes are synced to disk (unless you ask it not to), which is very slow.

    There’s also a risk that the initial part of the transfer might be fast due to fast buffers on your disks, but once they fill up, the disk controller can’t hide the real time it takes to write your data to its slower cells.

    1. Hi Oscar, I did; see part 8. But the results in my configuration were also not very encouraging.
