In a previous post I explored the steps necessary to increase Owncloud’s file size limit to 3 GB and verified that such large files can still be downloaded even when each gigabyte takes around 20 minutes over a ‘slow’ 10 Mbit/s VDSL uplink. In my initial use case I uploaded my large files locally over a high-speed link, so I didn’t encounter any timeouts. But what if large files are transferred over slower links: is there a timeout in Apache or Owncloud that aborts the process after some time?
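For completeness, the knobs that would typically govern such timeouts are Apache’s Timeout directive and PHP’s execution-time limits (Owncloud runs as a PHP application). The values below are illustrative examples, not the settings from my installation:

```
# Apache (e.g. apache2.conf): seconds Apache waits for network I/O
# before aborting a connection
Timeout 300

; PHP (php.ini): caps on script run time and on the time spent
; receiving request input, both relevant for long uploads
max_execution_time = 3600
max_input_time = 3600
```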
I put this to the test with a 200 MB file that I slowly uploaded to Owncloud, with a throttle in place on the wireless network interface of my notebook. On Linux, ‘tc’ (traffic control) is a great tool for artificially slowing down traffic streams. The following three commands reduce the uplink speed from my notebook to my Owncloud server to just 256 kbit/s while leaving traffic to other destinations untouched:
tc qdisc add dev wlan0 root handle 1: cbq avpkt 1000 \
   bandwidth 100mbit

tc class add dev wlan0 parent 1: classid 1:1 cbq \
   rate 256kbit allot 1500 prio 5 bounded isolated

tc filter add dev wlan0 parent 1: protocol ip prio 16 \
   u32 match ip dst 192.168.22.33 flowid 1:1

# remove the throttle again with this command:
# tc qdisc del dev wlan0 root
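As a quick sanity check, the expected transfer time at this rate can be worked out with a little shell arithmetic (using the 200 MB file and 256 kbit/s throttle from the test):

```shell
# expected transfer time for a 200 MB file at 256 kbit/s,
# ignoring protocol overhead and retransmissions
size_bits=$((200 * 1024 * 1024 * 8))
rate_bits_per_s=256000
seconds=$((size_bits / rate_bits_per_s))
echo "$((seconds / 60)) minutes"   # about 109 minutes, i.e. close to 2 hours
```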
At this speed it takes around 2 hours to upload the file over Owncloud’s web interface. And to my satisfaction, the server was patient enough for the upload to complete successfully. An ‘md5sum’ check then confirmed that the file on the server and the original on my notebook were identical, which means there was no early abort during the upload that I had missed.
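The integrity check boils down to comparing checksums of both copies. A self-contained sketch of the idea; the file names are stand-ins, and in my case the second copy was of course the file on the server rather than a local ‘cp’:

```shell
# create a sample file standing in for the original on the notebook
head -c 1048576 /dev/urandom > original.bin
# stand-in for the copy that ended up on the Owncloud server
cp original.bin server_copy.bin
# identical checksums mean the upload finished without truncation or corruption
md5sum original.bin server_copy.bin
```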
256 kbit/s might be considered very slow, but I was not interested in the speed itself, only in whether Apache or Owncloud would run into a timeout. Then again, perhaps 256 kbit/s is not so slow after all, as many people on DSL lines are still limited to uplink speeds of that order of magnitude today. In other words, it was quite a realistic test after all!