My Exodus from Truecrypt to DM-Crypt Is Complete

Back in August I wrote that I had started my exodus from Truecrypt as the software is no longer supported by its authors. Over the months I've experimented a lot with dm-crypt on Linux to see if it is a workable alternative for me. As it turns out, dm-crypt works great, and here's how my migration went. It's a bit of a long story, but since I also did a couple of typical maintenance tasks along the way that come up when running out of disk space, I thought it's a story worth telling to pass on the tips and tricks I picked up from different sources.

Migrating My Backup Drives To DM-Crypt

As a first step I migrated my backup hard drives from Truecrypt to dm-crypt while staying with Truecrypt on my PC. Instead of using a dm-crypt container file I chose to create a dm-crypt (LUKS) partition on my backup drives with Ubuntu's “Disk Utility”. Ubuntu automatically recognizes the dm-crypt partition when I connect the backup hard drives to the PC and asks for the password. Pretty much foolproof.
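For those who prefer the command line over “Disk Utility”, the same result can be achieved with cryptsetup directly. Here's a minimal sketch, assuming the backup drive shows up as /dev/sdb with a single partition /dev/sdb1 (device names are examples, so double-check with lsblk before running anything destructive):

# WARNING: this wipes /dev/sdb1 - verify the device name with lsblk first
sudo cryptsetup luksFormat /dev/sdb1           # create the LUKS header, asks for a passphrase
sudo cryptsetup luksOpen /dev/sdb1 backup      # map the partition to /dev/mapper/backup
sudo mkfs.ext4 -L backup /dev/mapper/backup    # create a file system inside the encrypted container
sudo mount /dev/mapper/backup /mnt             # mount it for a first test

After that, Ubuntu should treat the drive just like one created with the graphical tool and ask for the password when it is connected.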

Running Out Of Disk Space Faster Than I Thought

The next step came when my 500 GB SSD was close to becoming full and I had to get a bigger SSD. Fortunately prices have once again come down quite a bit over the last year and a 1 TB Samsung 840 EVO was to be had for a little over 300 euros. I had some time to experiment with different migration options as the 840 EVO had a firmware bug that decreased file read speeds over time, so I chose to hold off on the migration until Samsung had released a fix.

DM-Crypt Partitions Can Be Mounted During the Boot Process

A major positive surprise during those trial runs was that even my somewhat older Ubuntu 12.04 LTS recognizes the dm-crypt partition when it is configured in the “crypttab” and “fstab” configuration files and asks for the password during the boot process, before the user login screen is shown. Perfect!

Here's what my “/etc/crypttab” entry looks like:

# create a /dev/mapper device for the encrypted drive
data   /dev/sda3     none luks,discard

And here's what my “/etc/fstab” entry looks like:

# /media/data LUKS
/dev/mapper/data /media/data ext4 discard,rw 0 0
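The name in the first column of the crypttab entry (“data”) determines the name of the device mapper node, which is why fstab then mounts “/dev/mapper/data”. To test the setup without rebooting, the same thing can be done by hand (a quick sketch using the device names from above):

sudo cryptsetup luksOpen /dev/sda3 data    # creates /dev/mapper/data and asks for the passphrase
sudo mount /dev/mapper/data /media/data    # mount it the same way fstab does during boot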

Sins Of The Past – Hard Disk Migration The Hard Way

When I initially upgraded from a 350 GB hard drive to a 500 GB SSD I used Clonezilla to make a 1:1 copy of my hard drive to the SSD and used the extra space for a separate partition. After all, I couldn't imagine that I would run out of disk space on the initial 350 GB partition anytime soon. As it turned out pretty quickly, that was a bad mistake, as the virtual machine images on that partition soon grew beyond 200 GB. As a consequence I moved my Truecrypt container file to the spare partition, but that only delayed the inevitable for a couple of months. In the end I was stuck with about 50 GB left on the primary partition and 100 GB on the spare partition, with the virtual machine images threatening to eat up the remaining space within the next months.

As a consequence, I decided that once I moved to a 1 TB SSD, I would change my partitions and migrate to a classic separation of the OS in a small system partition and a large user data partition. I left the system partition unencrypted as the temp directory is in memory, the swap partition is a separately encrypted partition anyway and the default user directories are file system encrypted. In other words, I decided to only encrypt the second partition with dm-crypt in which I would store the bulk of my user data and to which I would link from my home directory.
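For reference, the separately encrypted swap partition is also driven by crypttab. On Ubuntu it is typically set up with a fresh random key on every boot, with entries roughly along these lines (a sketch of what Ubuntu's encrypted-swap setup usually creates; the device name and cipher options may differ on your system):

# /etc/crypttab - swap re-encrypted with a random key on every boot
cryptswap1 /dev/sda2 /dev/urandom swap,cipher=aes-cbc-essiv:sha256

# /etc/fstab - mount the mapped device as swap
/dev/mapper/cryptswap1 none swap sw 0 0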

Advantages of a Non-Encrypted System Partition

There are a couple of advantages of a non-encrypted system partition. The first one is that in case something goes wrong and the notebook refuses to boot, classic tools can be used to repair the installation. The second advantage is that Clonezilla can back up the system partition very quickly because it can see the file system and hence only needs to read and compress the sectors of the partition that are filled with data. In practice my system partition contains around 20 GB of data which Clonezilla can copy in a couple of minutes even on my relatively slow Intel i3 based notebook. If I used dm-crypt for the system partition, Clonezilla would have to back up each and every sector of the 120 GB partition.

Minimum Downtime Considerations

The next exodus challenge was how to migrate to the 1 TB SSD with minimum downtime. As this is quite a time-intensive process during which I can't use the notebook, I played with several options. The first one I tried was to use Clonezilla to only copy over the 350 GB primary partition to the new SSD and then shrink it down to around 120 GB. This works quite well but it requires shrinking the partition before recreating the swap partition and then manually reinstalling the boot sector. Reinstalling the boot sector is a bit tricky if done manually but the Boot-Repair-Disk project pretty much automates the process. The advantage of only copying one partition obviously is that it speeds things up quite a bit. In the end I chose another option when the time came and that was to use Clonezilla to make a 1:1 copy of my 500 GB SSD, including all partitions, to the 1 TB SSD. This saved me the hassle of recreating the boot sector and I had the time for it anyway as I ran the job overnight.
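For the record, what Boot-Repair-Disk automates is essentially reinstalling GRUB from a live system via a chroot. A rough sketch of the manual procedure, assuming the system partition ends up on /dev/sda1 and the boot loader goes into the MBR of /dev/sda (device names are examples):

# from an Ubuntu live CD/USB
sudo mount /dev/sda1 /mnt
sudo mount --bind /dev /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys
sudo chroot /mnt grub-install /dev/sda    # reinstall the boot loader into the MBR
sudo chroot /mnt update-grub              # regenerate the boot menu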

Tweaking, Recreating and Encrypting Partitions On The New SSD

Once that was done I had a fully functional image on the 1 TB SSD with a working boot sector, and to continue the work I put it into another notebook. This way I could finish the migration while still being able to work on my main notebook. At this point I deleted all data on the spare partition of the 1 TB SSD and also the virtual machine images on the primary partition. This left about 20 GB on the system partition. I then booted a live Ubuntu system from a CD and used “gparted” to shrink the system partition from 350 GB down to 120 GB and to recreate a Linux swap partition right after the new and smaller system partition. Like the 1:1 Clonezilla copy process earlier, this takes quite a while. This is not a problem, however, as I could still work on the 'old' SSD and even change data there, as migrating the data would only come later. Once the new drive was repartitioned I rebooted into the system on my spare notebook and used Ubuntu's “Disk Utility” to create the dm-crypt user partition in the 880 GB of remaining space on the SSD.
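In case you wonder what gparted does under the hood when shrinking: conceptually it first shrinks the ext4 file system and only then the partition itself. Roughly, the manual equivalent would look like the lines below; this is only meant to illustrate the order of operations, not as something to type in blindly (device name and sizes are examples):

sudo e2fsck -f /dev/sda1         # the file system must be checked before resizing
sudo resize2fs /dev/sda1 115G    # shrink the ext4 file system to a bit below the target size
# ...then shrink the partition itself to 120 GB in the partition table (gparted handles this step too)
sudo resize2fs /dev/sda1         # finally let the file system grow to fill the partition exactly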

Auto-Mounting The Encrypted Partition and Filling It With Data

As described above it's possible to auto-mount the encrypted partition during the boot process so the partition is available before user login. In my previous installation I had mapped the “Documents” folder and a couple of other directories to the Truecrypt volume via symbolic links, so I removed those links and created new ones pointing to empty directories on the new dm-crypt volume. And once that was done it was time to migrate all my data, including the virtual machine images, to the new SSD. I did this by backing up all my data to one of my cold-storage backup disks as usual and restoring it from there to the new SSD. The backup only takes a couple of minutes as LuckyBackup is pretty efficient by only copying new and altered files. To keep the downtime to a minimum I swapped the SSDs after I made the copy to the backup drives and started working with the 1 TB SSD in my production notebook. Obviously I restored the email directory and the most important virtual machine images first so I could continue working with those while the rest of the data was copied over in the background.
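The relinking itself is trivial, and since LuckyBackup is essentially a front end for rsync, the incremental copies are quick because only new and changed files are transferred. A small sketch with example paths:

# point the home directory folders to the encrypted volume (old symlink removed, data untouched)
rm ~/Documents
ln -s /media/data/Documents ~/Documents

# restore from the cold-storage backup disk, rsync style: only new and changed files are copied
rsync -a --delete /media/backup/Documents/ /media/data/Documents/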

Thunderbird Is A Special Bird

In my Truecrypt installation I used a symbolic link for the mail directory so I could have it on the Truecrypt volume while the rest of the Thunderbird profile remained in the user directory. At first I thought it would be enough to replace that link, but it turned out that Thunderbird also keeps the full path in its settings and doesn't care much about symbolic links. Fortunately the full paths can be changed in "Preferences – Mail Setup".
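For reference, those absolute paths end up in prefs.js inside the Thunderbird profile, so they can also be adjusted there directly while Thunderbird is closed. The entries look roughly like this (the server numbers and paths are made-up examples, not my actual setup):

// ~/.thunderbird/<profile>/prefs.js - edit only while Thunderbird is not running
user_pref("mail.server.server1.directory", "/media/data/mail/Local Folders");
user_pref("mail.server.server2.directory", "/media/data/mail/imap.example.com");

Depending on the Thunderbird version there may also be matching “directory-rel” entries that need to stay consistent.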

Summary

There we go, this is the story of my migration away from Truecrypt, upgrading to a bigger SSD and cleaning up my installation at the same time. I'm glad I could try everything on a separate notebook first, without Ubuntu complaining or making things difficult when it detected different hardware, as other operating systems perhaps would have. Quite a number of steps turned into trial-and-error sessions that would have caused a lot of stress had I run into them during the real migration. It's been a lot of work but it was worth it!

A 2 Amp USB Charger Is Great – If A Device Makes Use Of It

The smallest 2 ampere USB charger I've come across so far is from Samsung, and my Galaxy S4 makes almost full use of its capabilities by drawing 1.6 amperes when the battery is almost empty. In case you are wondering how I know, have a look at the measurement tool I used for measuring the power consumption of a Raspberry Pi. What I was quite surprised about, however, was that all other devices I tried it with, including a new iPhone 6, only charge at 1 ampere at most. I wondered why that is, so I dug a bit deeper. Here's a summary of what I've found:

One reason for not drawing more than 1 A out of the charger is that some devices simply aren't capable of charging at higher rates, no matter which charger is used. The other reason is that plain USB ports only supply a standardized 500 mA (USB2) or 900 mA (USB3), and anything beyond that requires either the USB Battery Charging specification or proprietary signaling. Here's how it works, with a quick charge-time comparison after the list:

  • When a device is first connected to USB it may only draw 100 mA until it knows what kind of power source is behind the cable.
  • If it's a PC or a hub, the device can request more power and, if granted, may draw up to 500 mA out of a USB2 connector. And that's about as much as my S4 will draw out of the USB connector of my PC.
  • USB3 connectors can supply up to 900 mA with the same mechanism.
  • Beyond the 500 mA of USB2 / 900 mA of USB3, the USB Battery Charging specification v1.1, published in 2007, defines two types of charging ports. The first is called Charging Downstream Port (CDP). When a device recognizes such a USB2 port it can draw up to 900 mA while still transferring data.
  • The second type of USB charging port defined by v1.1 of the spec is the Dedicated Charging Port (DCP). No data transfers are possible on such a port but it can deliver a current between 500 mA and 1.5 A. On such a port the D+ and D- data lines are shorted together over a resistance of at most 200 Ohms so the device can detect that it's not connected to a USB data port. Further, a device recognizes how much current it can draw out of such a port by monitoring the voltage drop when it increases its current consumption.
  • With v1.2 of the charging specification, published in September 2010, a Dedicated Charging Port may supply up to 5A of current.
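To put the difference into numbers: with the Galaxy S4's roughly 2600 mAh battery, and ignoring charging losses and the slower top-up phase near the end, the back-of-the-envelope charge times look like this:

echo "scale=1; 2600/1600" | bc    # about 1.6 hours at 1.6 A on the 2 A charger
echo "scale=1; 2600/1000" | bc    # about 2.6 hours at 1.0 A on a standard 1 A charger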

And that's as far as the standardized solutions go. In addition there are also some Apple and Samsung proprietary solutions to indicate the maximum current their chargers can supply:

  • Apple 2.1 Ampere
  • Apple 2.4 Ampere
  • Samsung 2.4 Ampere

There we go, quite a complicated state of affairs. No wonder only one of my devices makes use of the potential of my 2 A travel charger. For more information, have a look at the USB article on Wikipedia, which also contains links to the specifications, and at the external blog posts here, here and here.

Raising the Shields – Part 14: Skype Jumps Into My VPN Tunnel Despite The NAT

According to conventional wisdom, the days when Skype was secure are long gone and I use my own instant messaging server to communicate securely when it comes to text messaging. When it comes to video calling, however, there are few alternatives at the moment that are as universal, as easy to use and offer a similar video quality. Under normal circumstances Skype video calls are peer-to-peer, i.e. there is no central server on which the voice and video packets can be intercepted. That's a good thing, and Skype has many ways to find out whether a direct link between two Skype clients can be established.

And here's a really interesting scenario: when I'm traveling, Skype is even able to figure out that a direct link can be established between the Skype client on my notebook, which is connected to my home network through the VPN tunnel I usually set up to my VPN server at home, and a Skype client on a PC at home, despite a NAT sitting between the VPN link and the local home network. That means the Skype packets are routed directly between the two clients and at no time traverse a link on the Internet outside the VPN tunnel. In other words, potential attackers that can passively collect packets between where I am and my home network never get to see my Skype traffic in the first place, should they have the ability to decrypt it.
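A simple way to convince yourself that the media packets really stay inside the tunnel is to watch the tunnel interface while a call is running. A sketch, assuming the VPN comes up as tun0 and the PC at home has the example address 192.168.1.20:

# during a call, the peer-to-peer media stream to the PC at home should show up here
sudo tcpdump -ni tun0 host 192.168.1.20

If Skype had fallen back to a relay somewhere on the Internet instead, there would be little or no traffic to that address while the call is running.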

Sure, Skype and anyone who has access to Skype can still find out if and when I'm online, probably even where I'm online and when and to whom I make calls. The call content, however, can't be intercepted without me noticing, because I would see the traffic suddenly no longer flowing peer-to-peer through the VPN tunnel. Far from perfect, but something to work with for the moment.

Opera Turbo Turned Off After 30 Seconds

Opera and its server-side compression have helped me a lot over the years to overcome issues like slow connections or strange operator proxies blocking access to websites, such as the strange case I came across back in 2008. Fortunately, networks have become faster and other strange effects caused by meddling with data have also receded, so I usually use the full Opera browser these days instead of Opera Mini or the Opera Turbo functionality. But every now and then I end up in a GSM-only place, and so far the server-side compression has always helped. Well, up until now.

When I recently wanted to use Opera Turbo again to browse my favorite websites in a bandwidth-starved area, pages took a long time to load because all the advertising I can block so conveniently on my own machine with a modified hosts file was loaded again. Not only did the advertising slow down the page loads, splash screens and other intrusive ads are just not my cup of tea either. So after about 30 seconds I switched Opera Turbo off again and resorted to a non-proxied connection, which was no slower for my favorite pages than using server-side compression, as all the advertising was stripped out locally. And not only was it not slower, I also didn't have to put up with splash screen advertising. So for me the days of using server-side compression to speed up my web experience in bandwidth-limited areas are definitely over…
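For those not familiar with the hosts file trick: ad and tracking domains are simply pointed at a non-routable address so the browser never even contacts them. The entries look like this (the domains are just examples; ready-made block lists can be found on the web):

# /etc/hosts - requests to these hosts go nowhere instead of to the ad servers
0.0.0.0   ads.example.com
0.0.0.0   tracking.example.net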

Another LTE First For Me: Intercontinental Roaming

I've had quite a couple of LTE and roaming firsts this year and, as I've laid out in this post, 2014 is the year when affordable global Internet roaming finally became a reality. Apart from having used a couple of LTE networks in Europe over the last couple of months, I can now also report my first intercontinental LTE experience. When I recently traveled with my German SIM card to the United States, I was greeted by an LTE logo in both the T-Mobile US and AT&T networks. Data connectivity felt quick (but I didn't run speed tests, so I can't give a number) and with the 20 bands supported by my mobile device I could actually detect quite a number of LTE networks at the place in Southern California where I stayed for a week:

  • Verizon was active in Band 13 (700 MHz)
  • MetroPCS in Band 4 (1700/2100 MHz)
  • AT&T was available in Band 4 (1700/2100 MHz) and Band 17 (700 MHz)
  • Sprint had a carrier on air in Band 25 (1900 MHz, FDD) and Band 41 (2500 MHz, TDD)
  • T-Mobile US had a carrier on air in Band 4 (1700/2100 MHz)

And in case you wonder how you can find LTE transmissions without special equipment, have a look here. It's not quite straightforward to map transmissions to network operators, but it's not impossible with a bit of help from Wikipedia (see here and here) and 3GPP's band plan that shows the uplink and downlink frequencies of the different bands.
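As a quick example of how the band plan is used: the downlink center frequency follows from the EARFCN the phone reports as F = F_DL_low + 0.1 MHz x (EARFCN - offset), with F_DL_low and the offset taken from the 3GPP band table. For Band 4, F_DL_low is 2110 MHz and the offset is 1950, so a reported EARFCN of 2000 (just an example value) works out to:

echo "2110 + 0.1 * (2000 - 1950)" | bc    # 2115.0 MHz downlink center frequency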

Netflix, HTML5, Linux and What Else Made Me Sign-Up

There we go, I signed up to Netflix after being on the lookout for years for a video on demand service that would fit my needs! Here's the story:

A video on demand service has to run on Linux for me because that's my OS of choice for all my computers at home. This, together with a 4 year old media center PC, disqualified all VoD services so far because all of them either require the Adobe Flash player plugin or, even worse, Microsoft's Silverlight. I tried Amazon's video service for a while but the Linux version of the Adobe Flash player sooner or later crashes during video playback. I also tried the Linux wrapper for Silverlight, which seems to work fine on newer PCs. On my 4 year old media center PC, however, I never got a smooth video playback that way.

And then Netflix came around the corner with HTML5 video playback support. Unfortunately, but hardly surprisingly, it uses an HTML5 extension (Encrypted Media Extensions) to play back DRM-protected media. Yes, I know that's evil from an open source point of view and Mozilla has so far refused to put it into their browser. On the other hand, however, Google has decided to support this extension in their Chrome browser. I'm about as far away from liking Chrome as I am from being a Microsoft or Adobe fanboy, but I can live with a Chrome installation on my Linux system for a specific purpose while continuing to use Firefox for everything but Netflix.

Up until last week a tweak was required to make Netflix work with Chrome on Linux: the user agent needed to be changed. I was tempted to install a user agent switcher for that purpose but didn't get around to it before Netflix announced that they now support Chrome on Linux as well. Having heard that, I signed up immediately to give it a try, and the video is as smooth on my somewhat older machine as I could ask for. Well done!

And the second issue I've had with most VoD services, in particular the ones offered by German companies, is that their support of the original English audio of the content is minimal at best. Not so with Netflix, everything I've watched so far has English audio.

So as you can imagine, I was busy over the weekend checking things out. Netflix says on the configuration settings page that full-HD video streaming requires a bandwidth of up to 6.5 Mbit/s. In practice I've observed that the content I've watched was streamed at around 3.5 Mbit/s, or around 1.5 GB per hour, on the PC and at around 1.5 Mbit/s, or around 650 MB per hour, via the Netflix app on my smartphone. Let's see how long Netflix can keep me entertained and what kind of impact that will have on my monthly data consumption over my VDSL line at home. So far, my monthly usage has been around 35 GB, which already includes a fair amount of audio and video streaming.

And the closing thought for today: Netflix also seems to offer some content in 4k resolution. No, I don't have a screen for such high resolution content, but I'm mentioning this because of the staggering bandwidth required for that resolution. On the settings page, Netflix says that 4k video requires up to 7.5 GB per hour, i.e. the video streams at over 16 Mbit/s. Now double that for two screens in the household… And now assume two times 2 hours of consumption a day, which would result in a monthly data usage for Netflix alone of 900 GB. Yes, I know, that's not going to happen tomorrow and not for everyone, but it shows where we are headed.
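For anyone who wants to double-check the conversions between stream rate and data volume, here's the arithmetic (8 bits per byte, 3600 seconds per hour, protocol overhead ignored):

echo "scale=2; 3.5 * 3600 / 8 / 1000" | bc    # 1.57 GB per hour at 3.5 Mbit/s
echo "scale=2; 7.5 * 1000 * 8 / 3600" | bc    # 16.66 Mbit/s needed for 7.5 GB per hour (the 4k figure)
echo "7.5 * 2 * 2 * 30" | bc                  # 900 GB per month for two screens watching 2 hours a day each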

How Fast Is An OpenVPN Server on A Raspberry Pi And A Banana Pi?

I've been running an OpenVPN server at home to protect my data traffic for quite some years now, first on a WRT54 Wi-Fi router and later on a Raspberry Pi, thanks to a great article over on ReadWrite. The solution I had so far was limited to a maximum throughput of 5 Mbit/s as that was the uplink speed of my VDSL line at home. As we now have an FTTH fiber line in Paris with a maximum speed of 264 Mbit/s in the downlink direction and 48 Mbit/s in the uplink direction, it was time to relocate my VPN service to that location to lift the 5 Mbit/s limit. It was really time for that as I easily surpass such speeds today while connected via UMTS and LTE. But it turns out the next roadblock is just around the corner.

And that next roadblock is the Raspberry Pi. Encrypting and decrypting the data must be quite computationally intensive, as the Raspberry Pi's processor is fully loaded at an encrypted line rate of around 10 Mbit/s. That's twice as much as I had before but still far from what the fiber line offers. So I decided to move to a Banana Pi with its much stronger processor. At around €40 without casing it only costs 10 euros more than a Raspberry Pi. And as it turns out, its processor can shuffle encrypted OpenVPN data through the Ethernet interface at a rate of 30 Mbit/s. That's not quite the line rate of the FTTH connection but it's not too bad either, and to go further I would have to put an Intel NUC or another device with a high-power CPU in place, which would cost much more. So the price/performance balance of the Banana Pi seems quite right to me, at least for now.
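In case you want to measure the ceiling of your own setup: a simple way is to run iperf3 on a machine behind the VPN server and test through the tunnel from the client side (a sketch; 10.8.0.1 is a typical OpenVPN-internal server address, adjust to your own configuration):

# on the server side of the tunnel
iperf3 -s

# on the VPN client, through the tunnel (-R also tests the reverse direction)
iperf3 -c 10.8.0.1
iperf3 -c 10.8.0.1 -R

Watching the CPU load with top on the Pi during such a test quickly shows whether the processor or the line is the bottleneck.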

Next on my list of things to do is to make the Banana Pi work as an OpenVPN client gateway and Wi-Fi access point. Today I use a Raspberry Pi to bundle the data traffic of all my devices while I'm traveling through a single VPN tunnel to my VPN gateway, which is, not surprisingly, also limited to 10 Mbit/s. All the scripts for configuring a Raspberry Pi are on GitHub, but since I'm running Ubuntu on the Banana Pi some things need to be tweaked.
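The client gateway part essentially boils down to enabling IP forwarding and NATing everything from the Wi-Fi side into the tunnel interface. The scripts on GitHub do this and a lot more; the lines below are just a minimal sketch of the core idea, assuming the tunnel comes up as tun0 and the access point runs on wlan0:

# let the Banana Pi forward packets between its interfaces
sudo sysctl -w net.ipv4.ip_forward=1

# NAT everything that leaves through the VPN tunnel
sudo iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE

# forward traffic from the Wi-Fi clients into the tunnel and let the answers back in
sudo iptables -A FORWARD -i wlan0 -o tun0 -j ACCEPT
sudo iptables -A FORWARD -i tun0 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT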

 

October 2014: Three Networks Left In Germany

A little note today, so I can find it again more easily in the future: October 2014 was the month when the EU-sanctioned merger of Telefonica O2 Germany and KPN's E-Plus was finalized from a contractual point of view. While the networks of the formerly independent companies are still separate, this effectively reduces competition from 4 mobile network infrastructures to 3 once O2 starts integrating the two networks.

Yes, the EU has put some conditions in place to 'ensure' (in their opinion) continued competition. I doubt, however, that even the most important one, putting a capacity reseller in place for a certain percentage of the combined O2 and E-Plus network, will do much in this regard. The past business practices of the company that got the contract have, well, let's say, been somewhat unusual, and even apart from that, it still doesn't make up for the fact that infrastructure competition is seriously hampered by this move.

So no, I'm not happy about this decision at all and I really hope that I will be proven wrong. But it's only hope, as up to today there aren't many examples in Europe, if any, where competition between three network infrastructures in a country has led to a healthy price level and adequate coverage.

So let's see what the mobile landscape in Germany will look like 2 years and 5 years down the road from now.

My First Prepaid LTE Experience

It's taken a long time and still today, at least in Germany, most network operators reserve their LTE networks for their postpaid customers. In recent months, this has somewhat changed in Germany with the fourth network operator also starting LTE operations and allowing their prepaid customers access from day one. These days their LTE network is also available in Cologne so I had to take a closer look, of course with a prepaid SIM and a €2 per 24 hours data option that gave me up to 1 GB of unthrottled data.

The data rates I could achieve were not stellar but not really bad either. Under very good signal conditions I got close to 30 Mbit/s in the downlink direction and about 10 Mbit/s in the uplink direction. Closer examination revealed that they are using a 10 MHz carrier in the 1800 MHz band, which should allow, under ideal conditions, up to 75 Mbit/s in the downlink direction (have a look here if you'd like to know how you can find out which band and bandwidth your LTE network operator is using). But no matter what I did and where I went in the city, 30 Mbit/s was the magical limit. I don't think the air interface is the limit, so the bottleneck must be somewhere else. Under other circumstances I would probably be ecstatic about such speeds, but compared to the data rates of 100 Mbit/s and more that other operators achieve easily, the 30 Mbit/s pale in comparison.

In a recent network test I reported on, CS fallback voice call establishment times in that network were reported to be pretty bad. I can't confirm this, however, so perhaps they have changed something in their network in the meantime. What's a bit unfortunate, though, is that after a voice call the mobile stays in 2G or 3G for a long time before returning to LTE. Other network operators are more advanced and redirect their mobiles back to LTE right after the call, which makes for a much better experience. Also, I noticed that there's a 2-3 second interruption in the data traffic when switching between UMTS and LTE. That means they must still be using the rather crude LTE release with redirect to UMTS procedure rather than a much smoother PS handover.

While the above is perhaps still excusable, there's one thing they should have a look at quickly: whenever the mobile switches from 2G or 3G back to LTE, the PDP context is lost. In other words, I always get a new IP address when that happens, which kills, for example, my VPN tunnel every time. Quite nasty, and that's definitely a network bug. Please fix!

In summary, the network speed is not stellar compared to what others offer today and some quirks in the network still have to be fixed. On the other hand, you can pick up a prepaid SIM in a supermarket and get LTE connectivity without a contract.

Affordable Global Internet Access Roaming Becoming A Reality

Accessing the Internet from a mobile phone or tethering a PC over it while traveling all over the world has been possible for many years. Unfortunately, it was also prohibitively expensive. A workaround was to use local SIM cards, but getting them was, and often still is, a hassle. 2014, however, will have been the year when all of that changed, at least for some of us, fortunately including me. And here's why:

New in 2014: EU Data Roaming For A Few Euros A Month

Earlier this year I reported about the new Euro-roaming offer of my network operator that lets me use the data bucket included in my monthly subscription in all EU countries for 5 euros extra per month. One price for all countries. Perfect, my Internet access problem is solved, and I no longer need local SIM cards except in really exceptional circumstances.

New in 2014: Global Roaming Prices Reach Affordable Levels

But the EU isn't the world and I also travel a lot to Asia and the US. Again, the new roaming prices of my home network operator for global destinations completely change the game. Instead of 20 euros a day for only a few megabytes, the latest offer for any destination is around 12 euros per week for a 150 MB bucket. If the data is used up sooner, another bucket can be bought instantly via a landing page. 150 MB is not much by today's standards and I had to buy several packages during a recent trip to China to keep me connected, but compared to previous prices this is heaven and totally usable.

New in 2014: Fast Networks And LTE Roaming

When I visited countries such as China in previous years I always noticed how slow even 3G connectivity was. While it could have been the local network, I suspect that the connection between the visited network and my home network was rather underdimensioned. Again, when I was recently in China, 3G connectivity was fast and totally usable. I'm delighted! Also, 2014 is the year when LTE roaming agreements finally started to fall into place. Over the past months I've roamed on foreign LTE networks in quite a number of countries and I've achieved data rates of well over 20 Mbit/s. Not that 3G networks are slow, but seeing that LTE indicator in the status bar is still something special and promises fast data rates.

New in 2014: Viginti Band LTE Phones That Also Include 5-6 UMTS Bands

While LTE roaming in Europe is not a problem for European customers from a mobile device point of view, getting LTE connectivity in other parts of the world has been another matter altogether so far, as North America and China use different UMTS and LTE bands. 5-6 band UMTS and LTE devices have been available for a while in Europe but these unfortunately did not include bands for other regions. But again, things have changed dramatically for the better. One popular smartphone now boasts support for 20 (!) LTE bands and 6 UMTS bands. This includes all major LTE and UMTS bands used in Europe and North America and even the TD-LTE bands used in China. That's especially good news for global travelers no matter where they come from, because true global UMTS and LTE roaming has now become a reality. I'm more than delighted!

I've been using mobile Internet access while traveling for pretty much a decade now. 2014, however, has brought about as dramatic a change in my usage behavior as the introduction of local prepaid SIM cards for mobile Internet access did many years ago.