HDD Performance – Part 5 – Reading 7 TB Real World Data

Reading 7 TB of data in large and small files from my 8 TB HDD in MB/s over time

In my HDD performance analysis series, I would now like to move on and have a look at how fast my ‘real world’ data on my backup hard disks can be read. At the moment, I have around 7 TB of data on my backup drives, which consists of a significant number of very large virtual machine snapshot files in the double-digit gigabyte range, many smaller image files of 2–3 MB, and an even bigger number of very small document files of a few hundred kilobytes at most. So how fast can I read such a data mix from hard drives with moving heads?
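As an aside, measuring something like this does not require special tooling. Here is a minimal sketch of the idea in Python (the mount point, chunk size, and logging interval are placeholders, and this is an illustration rather than my exact measurement script, which is described in part 3 of the series):

```python
import os
import time

ROOT = "/mnt/backup"        # hypothetical mount point of the backup drive
CHUNK = 8 * 1024 * 1024     # read files in 8 MB chunks
INTERVAL = 10               # log throughput every 10 seconds

bytes_in_interval = 0
interval_start = time.monotonic()

# Walk the whole backup tree and read every file, large or small
for dirpath, _, filenames in os.walk(ROOT):
    for name in filenames:
        try:
            with open(os.path.join(dirpath, name), "rb") as f:
                while chunk := f.read(CHUNK):
                    bytes_in_interval += len(chunk)
                    now = time.monotonic()
                    if now - interval_start >= INTERVAL:
                        rate = bytes_in_interval / (now - interval_start) / 1e6
                        print(f"{time.strftime('%H:%M:%S')}  {rate:.1f} MB/s")
                        bytes_in_interval = 0
                        interval_start = now
        except OSError:
            pass  # skip unreadable files
```

One caveat: for numbers that reflect the disk rather than RAM, the operating system’s page cache must not already hold the files, e.g. by unmounting and remounting the drive before the run.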

Continue reading HDD Performance – Part 5 – Reading 7 TB Real World Data

HDD Performance – Part 4 – Writing Large Files to Empty Spaces

20 TB HDD, writing 2.5 TB of data after randomly deleting 50 GB files

And on we go with another round of looking at hard disk drive performance. After writing 50 GB files to a number of hard drives in episode 2, I decided to have a look at how the drives would perform after randomly deleting about 2.5 TB worth of 50 GB files and then filling up the empty space again. Before running the tests, my expectation was that the outer parts of the disk would be filled first, as write speeds are fastest there, and that write speeds would gradually decrease over time. The graph at the top of this post, which plots the write data rate in MB/s over time, shows something else, however.

Continue reading HDD Performance – Part 4 – Writing Large Files to Empty Spaces

HDD Performance – Part 3 – Measurement Setup

In parts 1 and 2 of this series, I’ve had a first look at the performance of a number of different hard drives I use to back up large amounts of data. I currently have around 8 TB of data that needs to be backed up regularly, so speed is of the essence and decides whether a backup cycle takes an hour, half a day, or even more. Before I go on with further measurement results, here’s a quick summary of how I collected my data:
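In essence: write (or read) a large amount of data and log the throughput every few seconds, then plot the result over time. As a minimal sketch of such a write measurement loop (file name, sizes, and interval below are placeholders, not my actual script):

```python
import os
import time

TARGET = "/mnt/backup/testfile.bin"   # hypothetical test file on the drive under test
FILE_SIZE = 50 * 10**9                # 50 GB, matching the test files used in this series
BLOCK = 16 * 1024 * 1024              # write in 16 MB blocks
buf = os.urandom(BLOCK)               # incompressible data

written = 0
last_log, last_bytes = time.monotonic(), 0
with open(TARGET, "wb") as f:
    while written < FILE_SIZE:
        f.write(buf)
        written += BLOCK
        now = time.monotonic()
        if now - last_log >= 10:      # log progress every 10 seconds
            rate = (written - last_bytes) / (now - last_log) / 1e6
            print(f"{written / 1e9:6.1f} GB written, {rate:.1f} MB/s")
            last_log, last_bytes = now, written
    f.flush()
    os.fsync(f.fileno())              # make sure data is on disk, not in the page cache
```

The final fsync matters: without it, the last few gigabytes might still sit in the page cache when the loop finishes, which would inflate the apparent write rate at the end of the run.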

Continue reading HDD Performance – Part 3 – Measurement Setup

So Painless – Upgrading to LineageOS 22 – Android 15

Just a quick note today about upgrading my Pixel 6 from LineageOS 21 to LineageOS 22, which is Android 15 without the privacy-invading Google parts:

While standard updates can be done on the phone itself, upgrading from one major version to the next requires side-loading. I’m always a bit wary of doing this, because I’m not keen on bricking a device that I use 24/7, or on having to reinstall everything. But LineageOS points out that going from one major version to the next with sideloading keeps the data in place. I’ve done it twice now on my Pixel 6 and it worked both times.

So here is the deal: It only takes about 10 minutes and two shell commands on the PC: one adb command to reboot into sideload mode, and one adb command to sideload the latest Lineage ...signed.zip image. And that’s it.
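For reference, this is essentially the whole procedure, wrapped here in a small Python helper for illustration (the package filename is a hypothetical example; use the build actually downloaded from lineageos.org):

```python
import subprocess

# Hypothetical example filename - use the actual signed build for your device
PACKAGE = "lineage-22.1-20250301-nightly-oriole-signed.zip"

# Step 1: reboot the phone into sideload mode ("adb reboot sideload")
subprocess.run(["adb", "reboot", "sideload"], check=True)

input("Wait until the phone shows the sideload screen, then press Enter... ")

# Step 2: push and install the package ("adb sideload <package>")
subprocess.run(["adb", "sideload", PACKAGE], check=True)
```

In practice, I just type the two adb commands directly into a shell, of course.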

For the full details on how to update from one major version to the next, have a look here for all update options or go straight here for the ‘manual’ major version jump info.

HDD Performance – Part 2 – Huge Files Write Performance

In the previous post on this topic, I had a closer look at the write performance of the best backup hard drive I had in stock at home. I bought that drive recently, because I was running out of space on my other drives and because they somehow did not seem to be as snappy as when I initially bought them. So let’s have a look and compare.

Continue reading HDD Performance – Part 2 – Huge Files Write Performance

10 Years of Fiber in Paris

Incredible! Today I realized that it’s the 10th anniversary of fiber connectivity at our home in Paris! 10 YEARS! Here’s my original post from November 2014. I am a bit speechless. These days, I could even upgrade to a 10 Gbit/s connection. Overall, I’d say it’s a success story, with a few bumps and bruises in between. The biggest one was certainly 2 years ago, when my fiber line failed and it took 4 months to get it back into service. But OK, you live and you learn. Forget competition and resellers, just go to the company that owns the fiber. But I don’t want to dwell on this today. 10 YEARS!

HDD Performance – Part 1 – Huge Files on a New 20 TB Drive

My data heap keeps growing, and I do have a good multi-layer and multi-location backup strategy. Offline and off-site storage is the motto of the day, which requires hard disks with large capacities so data can be physically moved. So far, I have used several 8 TB hard disks to which I would sync the data from various sources. I’ve come to a point, however, where 8 TB is no longer enough, and at the same time, I noticed a significant slowdown during my backup procedures. So I bought my first 20 TB drive which, so far, performs very nicely. But I really do wonder why my 8 TB drives seem to have slowed down so much while that new shiny 20 TB drive (still?) performs much better. So it was time to do some benchmark tests with different drives and real world data, so I could see how new drives perform with my data and analyze the performance of existing drives.

But why do I care? Because it makes a huge difference whether 10 TB of data is moved to or from a disk drive at an average of 50 MB/s or 200 MB/s. At 50 MB/s, moving such an amount of data requires more than 55 hours, while at 200 MB/s it takes less than 14 hours. And we are not even talking 20 TB yet. You see where this goes…
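The back-of-the-envelope math, as a snippet:

```python
def transfer_hours(terabytes: float, mb_per_s: float) -> float:
    """Hours needed to move `terabytes` of data at a sustained rate of `mb_per_s`."""
    return terabytes * 1e12 / (mb_per_s * 1e6) / 3600

print(f"{transfer_hours(10, 50):.1f} h")    # ~55.6 hours
print(f"{transfer_hours(10, 200):.1f} h")   # ~13.9 hours
print(f"{transfer_hours(20, 200):.1f} h")   # even 20 TB at 200 MB/s takes ~27.8 hours
```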

Continue reading HDD Performance – Part 1 – Huge Files on a New 20 TB Drive

500 Mbps Bandwidth Throttling – Part 2

In the previous post, I’ve had a look at how a high-speed data transmission is throttled to 500 Mbps between two data centers in different countries. In this post, I’ll have a look at how the TCP sequence number and transmission window graphs look for the same throttling scenario when downloading data from my server over an FTTH fiber line in Paris.

Continue reading 500 Mbps Bandwidth Throttling – Part 2

500 Mbps Bandwidth Throttling – Part 1

A few months ago, I moved my services, such as this blog, from a bare metal server in a data center in Finland to another bare metal server in France. One drawback of the move was that the bandwidth to the server is limited to 500 Mbps instead of the 1 Gbps the network interface could provide. And indeed, the data center operator does enforce the 500 Mbps limit in the downlink direction. Recently, I wondered how that is actually done in practice and had a closer look with Wireshark. As you can see above, the result is quite interesting!

Continue reading 500 Mbps Bandwidth Throttling – Part 1

Bucket Watching – S3 at Hetzner and Scaleway

I’m old school: I like locally attached block devices for data storage. Agreed, we are living in the age of the cloud, but so far, the amount of data I store at home and in data centers could always be placed on block devices, i.e. flash drives directly connected to the server. Recently, however, I’ve been thinking a bit about how to store images and videos in the cloud and how to upload and synchronize such data from different devices. That means that a few hundred gigabytes will definitely not do anymore; we are quickly talking about TBs here. Locally attached or network block storage of such a magnitude in a data center is quite expensive; we are talking 50 to 100 euros a month per TB. But perhaps there is another option? Many cloud providers also offer S3-compatible object storage today at one tenth of the cost, i.e. €6 per TB per month. Could that be an alternative?
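The nice part about ‘S3 compatible’ is that the standard tooling works against any such provider; only the endpoint URL and the credentials change. A minimal sketch with boto3 (the endpoint, bucket name, and keys below are placeholders, not actual Hetzner or Scaleway values):

```python
import boto3

# Placeholder endpoint and credentials - each provider publishes its own
# S3 endpoint URL and issues its own access keys
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-provider.com",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Upload a file to a bucket
s3.upload_file("holiday-video.mp4", "my-media-bucket", "videos/holiday-video.mp4")

# List what is in the bucket
for obj in s3.list_objects_v2(Bucket="my-media-bucket").get("Contents", []):
    print(obj["Key"], obj["Size"])
```

The same code talks to Hetzner, Scaleway, or AWS itself, which also makes it easy to compare providers.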

Continue reading Bucket Watching – S3 at Hetzner and Scaleway