First Carrier in Germany Starts LTE-Advanced Carrier Aggregation with 300 Mbit/s

In a number of European countries and elsewhere, network operators have rolled out LTE-Advanced Carrier Aggregation in recent months. Most of them bundle a combination of 10, 15 or 20 MHz carriers. In Germany, the first mobile network operator has now also started Carrier Aggregation and has gone straight to the maximum possible today: two full 20 MHz carriers for a theoretical top speed of 300 Mbit/s with LTE Category 6 devices.

Nicely enough, the carrier has also enhanced its publicly available network coverage map to show where 2×20 MHz CA is available (click on the LTE 300 Mbit/s checkbox). At the nationwide zoom level there's not much to be seen, but when zooming into the map over big cities such as Cologne, Düsseldorf, Berlin and many others, you can see that these are already quite well covered. I'm looking forward to the first reports by the tech press on how much can be achieved in practice.

Power Cycling My Backup Router With My Raspi

I am quite unhappy to admit it, but when it comes to reliability, the LTE router I use for backup connectivity for my home cloud comes nowhere close to my VDSL router. Every week or so, after the daily power reset, the router fails to connect to the network for no apparent reason. Sometimes it connects but the user plane is broken: packets still go out, but my SSH tunnels do not come up while the authentication log on the other side shows strange error messages. The only way to get things back on track is to reboot the LTE router or to power cycle it. Rebooting the router can only be done from inside the network, so when I'm traveling and the network needs to fall back to the backup link, there's nothing I can do should that fail.

When I recently stumbled over the 'EnerGenie EG-PM2' power strip that has individually switchable power sockets via a built-in USB interface, I knew the time had come to do something about this. At around 30 euros it's quite affordable, and the software required on the Raspberry Pi, Ubuntu or Debian side is open source and already part of the software repositories. A simple 'sudo apt-get install sispmctl' executed in a shell and the setup is up and running without further configuration. Individual power sockets are switched off and on via the following shell commands:

sudo sispmctl -f 3   # switches power socket 3 off
sudo sispmctl -o 3   # switches power socket 3 on

It couldn't be easier and I had the basic setup up and running in two minutes. As a next step I wrote a short Python script that checks if Internet connectivity is available via the backup link and, if not, power cycles the LTE router. I noticed that there's a Python wrapper for 'sispmctl', but it's also possible to just execute a command in a shell from Python as follows:

import subprocess
result_on = subprocess.call("sudo sispmctl -o 4", shell=True)

Perhaps not as elegant as using the wrapper but it works and the result variable can be checked for problems such as the USB link to the power strip being broken.
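For completeness, here's a minimal sketch of what such a watchdog script could look like. It is only an illustration of the idea, not my actual script: the socket number, the ping target and the timeouts are example values that need to be adapted, and making sure the ping really goes out over the backup link may require pinning it to the right interface.

#!/usr/bin/env python
# Minimal watchdog sketch (example values): power cycle the LTE router
# on socket 4 if no reply comes back from a well-known host.
import subprocess
import time

def internet_reachable():
    # one ping with a 5 second timeout; returns True if a reply came back
    return subprocess.call("ping -c 1 -W 5 8.8.8.8", shell=True) == 0

if not internet_reachable():
    subprocess.call("sudo sispmctl -f 4", shell=True)  # socket off
    time.sleep(10)                                      # give the router time to fully power down
    subprocess.call("sudo sispmctl -o 4", shell=True)  # socket back on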

LTE Carrier Aggregation: Intra-Band Non-Contiguous

Apart from the LTE Carrier Aggregation used in practice today, which combines channels in different frequency bands for higher throughput, there are also CA combinations that combine channels in the same frequency band that are not next to each other. Such combinations are called Intra-Band Non-Contiguous. Quite a mouthful. Now what would they be good for?

I don't have any practical examples but I think such combinations would make sense for network operators that have either received several chunks of spectrum in the same band over time or have acquired additional spectrum, e.g. through a merger with another network operator.

When looking at this carrier aggregation table, such combinations are foreseen for the US, Europe and China. In the US the non-contiguous combination is foreseen in band 4 (1700/2100 MHz), which quite a lot of carriers seem to use. In Europe, band 3 (1800 MHz) and band 7 (2600 MHz) have such combinations defined as well. I wonder which carriers might want to use them in the near future. Any idea?

G.Fast – A Technology To Prevent A Fiber Monopoly?

Fiber connectivity is moving closer and closer to people's homes. Some, like me in Paris, are fortunate enough to get a fiber line right into the apartment and enjoy speeds well beyond 250 Mbit/s in the downlink and 50 Mbit/s in the uplink. That's something the good old telephone line can't match today by a wide margin. Even cable modems that use the TV cable can't reach those speeds at the moment, particularly in the uplink direction, which is a must for hosting services at home. In a previous post I have thus speculated that the network operator that is first willing to deploy real fiber to people's homes is likely to become the next monopoly operator in an area. That's not good news for consumers in the long run. Any hope the good old copper line might catch up?

At the moment, VDSL2 Vectoring is the best there is for phone lines. With that technology, speeds of 100 Mbit/s in the downlink and 40 Mbit/s in the uplink are possible. Easy to beat for fiber. G.fast promises to be the next step and offers theoretical top speeds of 500 Mbit/s to 1 Gbit/s. Have a look at this Wikipedia entry for further details. The problem, however, is that such high speeds are only possible for cable lengths shorter than 100 m. A lot of outdoor DSLAM locations used for VDSL2 and VDSL2 Vectoring today are not that close to subscribers' homes, which means earthworks are still necessary to replace the copper cable between today's VDSL outdoor cabinets and the buildings with a fiber strand. But at least it removes the requirement to deploy fiber inside buildings.

When copper cables get longer, speeds drop quickly. At copper cable lengths of 200 meters, top speeds already drop down to 200 Mbit/s. 250 meters and you are down to 150 Mbit/s. Again, fiber already tops those numbers today easily.

So as fast as G.fast sounds, to get the promised speeds that fiber needs to go to the building, and that requires unloved earthworks. And that might bring us right back to the fiber monopoly. So I remain skeptical.

My Exodus from Truecrypt to DM-Crypt Is Complete

Back in August I wrote that I had started my exodus from Truecrypt as the software is no longer supported by its authors. Over the months I've experimented a lot with dm-crypt on Linux to see if it is a workable alternative for me. As it turns out, dm-crypt works great and here's how my migration went. It's a bit of a long story, but since I did a couple of other things along the way that are typical maintenance tasks when running out of disk space, I thought it's a story worth telling to pass on the tips and tricks I picked up from different sources along the way.

Migrating My Backup Drives To DM-Crypt

At first I migrated my backup hard drives from Truecrypt to dm-crypt while I stayed with Truecrypt on my PC. Instead of using a dm-crypt container file I chose to create a dm-encrypted partition on my backup drives with Ubuntu's “Disk Utility”. Ubuntu automatically recognizes the dm-crypt partition when I connect the backup hard drives to the PC and asks for the password. Pretty much foolproof.
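For those who prefer the command line over the Disk Utility GUI, the same result can be achieved with cryptsetup. Here's a rough sketch, assuming the backup partition shows up as /dev/sdb1; the device name is just an example, so double-check it before running anything, as this wipes the partition:

sudo cryptsetup luksFormat /dev/sdb1        # create the LUKS container on the partition
sudo cryptsetup luksOpen /dev/sdb1 backup   # unlock it as /dev/mapper/backup
sudo mkfs.ext4 /dev/mapper/backup           # create a file system inside the container
sudo mount /dev/mapper/backup /mnt          # and mount it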

Running Out Of Disk Space Faster Than I Thought

The next step came when my 500 GB SSD was close to becoming full and I had to get a bigger SSD. Fortunately, prices have once again come down quite a bit over the last year and a 1 TB Samsung 840 EVO was to be had for a little over 300 euros. I had some time to experiment with different migration options as the 840 EVO had a firmware bug that decreased file read speeds over time, so I chose to postpone my migration until Samsung had released a fix.

DM-Crypt Partitions Can Be Mounted During the Boot Process

A major positive surprise during those trial runs was that even my somewhat older Ubuntu 12.04 LTS recognizes the dm-crypt partition during the boot process when it is configured in the “fstab” and “crypttab” configuration files, and asks for the password before the user login screen is shown. Perfect!

Here's what my “/etc/crypttab” entry looks like:

# create a /dev/mapper device for the encrypted drive
data   /dev/sda3     none luks,discard

And here's what my “/etc/fstab” entry looks like:

# /media/data LUKS
/dev/mapper/data /media/data ext4 discard,rw 0 0
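To test the two entries without a full reboot, the encrypted volume can also be brought up by hand. On Debian/Ubuntu something along these lines should work (the mapping name “data” matches the crypttab entry above):

sudo cryptdisks_start data   # unlock the volume as defined in /etc/crypttab
sudo mount /media/data       # mount it as defined in /etc/fstab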

Sins Of The Past – Hard Disk Migration The Hard Way

When I initially upgraded from a 350 GB hard drive to a 500 GB SSD I used Clonezilla to make a 1:1 copy of my hard drive to the SSD and used the extra space for a separate partition. After all, I couldn't imagine that I would run out of disk space on the initial 350 GB partition anytime soon. That turned out to be a bad mistake pretty quickly, as the virtual machine images on that partition soon grew beyond 200 GB. As a consequence I moved my Truecrypt container file to the spare partition, but that only delayed the inevitable for a couple of months. In the end I was stuck with about 50 GB left on the primary partition and 100 GB on the spare partition, with the virtual machine images threatening to eat up the remaining space in the next months.

As a consequence, I decided that once I moved to a 1 TB SSD, I would change my partitions and migrate to a classic separation of the OS in a small system partition and a large user data partition. I left the system partition unencrypted as the temp directory is in memory, the swap partition is a separately encrypted partition anyway and the default user directories are file system encrypted. In other words, I decided to only encrypt the second partition with dm-crypt in which I would store the bulk of my user data and to which I would link from my home directory.

Advantages of a Non-Encrypted System Partition

There are a couple of advantages of a non-encrypted system partition. The first one is that in case something goes wrong and the notebook refuses to boot, classic tools can be used to repair the installation. The second advantage is that Clonezilla can back up the system partition very quickly because it can see the file system and hence only needs to read and compress the sectors of the partition that are filled with data. In practice my system partition contains around 20 GB of data which Clonezilla can copy in a couple of minutes even on my relatively slow Intel i3 based notebook. If I used dm-crypt for the system partition, Clonezilla would have to back up each and every sector of the 120 GB partition.

Minimum Downtime Considerations

The next exodus challenge was how to migrate to the 1 TB SSD with minimum downtime. As this is quite a time-intensive process during which I can't use the notebook, I played with several options. The first one I tried was to use Clonezilla to only copy over the 350 GB primary partition to the new SSD and then shrink it down to around 120 GB. This works quite well but it requires shrinking the partition before recreating the swap partition and then manually reinstalling the boot sector. Reinstalling the boot sector is a bit tricky if done manually, but the Boot-Repair-Disk project pretty much automates the process. The advantage of only copying one partition is obviously that it speeds things up quite a bit. In the end I chose another option when the time came and used Clonezilla to make a 1:1 copy of my 500 GB SSD including all partitions to the 1 TB SSD. This saved me the hassle of recreating the boot sector and I had the time for it anyway as I ran the job overnight.
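For the record, the manual way of putting the bootloader back roughly looks like the following when booted from a live system. The device names are examples and Boot-Repair-Disk does essentially the same with a lot less typing:

# assuming /dev/sdb is the new SSD and /dev/sdb1 its system partition
sudo mount /dev/sdb1 /mnt
sudo mount --bind /dev /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys
sudo chroot /mnt grub-install /dev/sdb   # write GRUB to the MBR of the new SSD
sudo chroot /mnt update-grub             # regenerate the boot menu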

Tweaking, Recreating and Encrypting Partitions On The New SSD

Once that was done I had a fully functional image on the 1 TB SSD with a working boot sector and, to continue the work, I put it into another notebook. This way I could finish the migration while still being able to work on my main notebook. At this point I deleted all data on the spare partition of the 1 TB SSD and also the virtual machine images on the primary partition. This left about 20 GB on the system partition. I then booted a live Ubuntu system from a CD and used “gparted” to shrink the system partition from 350 GB down to 120 GB and to recreate a Linux swap partition right after the new, smaller system partition. Like the 1:1 Clonezilla copy process earlier, this takes quite a while. This is not a problem, however, as I could still work on the 'old' SSD and even change data there, as migrating the data would only come later. Once the new drive was repartitioned I rebooted into the system on my spare notebook and used Ubuntu's “Disk Utility” to create the dm-crypt user partition in the 880 GB of remaining space on the SSD.

Auto-Mounting The Encrypted Partition and Filling It With Data

As described above, it's possible to auto-mount the encrypted partition during the boot process so the partition is available before user login. In my previous installation I had mapped the “Documents” folder and a couple of other directories to the Truecrypt volume via logical links, so I removed those links and created new ones pointing to empty directories on the new dm-crypt volume. Once that was done it was time to migrate all my data, including the virtual machine images, to the new SSD. I did this by backing up all my data to one of my cold-storage backup disks as usual and restoring it from there to the new SSD. The backup only takes a couple of minutes as LuckyBackup is pretty efficient and only copies new and altered files. To keep the downtime to a minimum I swapped the SSDs after I made the copy to the backup drives and started working with the 1 TB SSD in my production notebook. Obviously I restored the email directory and the most important virtual machine images first so I could continue working with those while the rest of the data was copied over in the background.
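Recreating the links themselves takes only a moment. The paths below are just examples of how such a mapping could look, not my actual directory layout:

# replace the old link with one that points to the new dm-crypt volume
rm ~/Documents
ln -s /media/data/Documents ~/Documents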

Thunderbird Is A Special Bird

In my Truecrypt installation I used a logical link for the mail directory so I could have it on the Truecrypt volume while the rest of the Thunderbird installation remained in the user directory. At first I thought it would only be necessary to replace the logical link to the mail folder, but it turned out that Thunderbird also keeps the full path in its settings and doesn't care much about logical links. Fortunately the full paths can be changed in "Preferences – Mail Setup".

Summary

There we go, this is the story of my migration away from Truecrypt, upgrading to a bigger SSD and cleaning up my installation at the same time. I'm glad I could try everything on a separate notebook first, without Ubuntu complaining or making things difficult when it detected different hardware, as other operating systems perhaps would have. Quite a number of steps ended up as trial and error sessions that would have caused a lot of stress had I run into them during the real migration. It's been a lot of work but it was worth it!

A 2 Amp USB Charger Is Great – If A Device Makes Use Of It

The smallest 2 ampere USB charger I've come across so far is from Samsung, and my Galaxy S4 makes almost full use of its capabilities by drawing 1.6 amperes when the battery is almost empty. In case you are wondering how I know, have a look at the measurement tool I used for measuring the power consumption of a Raspberry Pi. What I was quite surprised about, however, was that all other devices I tried it with, including a new iPhone 6, only charge at 1 ampere at most. I wondered why that is, so I dug a bit deeper. Here's a summary of what I've found:

One reason for not drawing more than 1 A out of the charger is that some devices simply aren't capable of charging at higher rates, no matter which charger is used. The other reason is that USB charging is only standardized up to 900 mA and everything above is proprietary. Here's how it works:

  • When a device is first connected to USB it may only draw 100 mA until it knows what kind of power source is behind the cable.
  • If it's a PC or a hub, the device can request more power and, if granted, may draw up to 450 mA out of a USB2 connector. And that's as much as my S4 will draw out of the USB connector of my PC.
  • USB3 connectors can supply up to 900 mA with the same mechanism.
  • Beyond the 450 mA USB2 / 900 mA USB3, the USB Charging Specification v1.1 that was published in 2007 defines two types of charging ports. The first is called Charging Downstream Port (CDP). When a device recognizes such a USB2 port it can draw up to 900 mA of power while still transferring data.
  • The second type of USB charging port defined by v1.1 of the spec is the Dedicated Charging Port (DCP). No data transfers are possible on such a port but it can deliver a current between 500 mA and 1.5 A. On such a port the D+ and D- data lines are shorted together over a 200 Ohm resistor so the device can find out that it's not connected to a USB data port. Further, a device recognizes how much current it can draw out of such a port by monitoring the voltage drop when current consumption is increased.
  • With v1.2 of the charging specification, published in September 2010, a Dedicated Charging Port may supply up to 5A of current.

And that's as far as the standardized solutions go. In addition there are also some Apple and Samsung proprietary solutions to indicate the maximum current their chargers can supply:

  • Apple 2.1 Ampere
  • Apple 2.4 Ampere
  • Samsung 2.4 Ampere

There we go, quite a complicated state of affairs. No wonder only one of my devices makes use of the potential of my 2 A travel charger. For more information, have a look at the USB article on Wikipedia, which also contains links to the specifications, and the external blog posts here, here and here.

Raising the Shields – Part 14: Skype Jumps Into My VPN Tunnel Despite The NAT

According to public wisdom, the days when Skype was secure are long gone, and I use my own instant messaging server to communicate securely when it comes to text messaging. When it comes to video calling, however, there are few alternatives at the moment that are as universal, as easy to use and that offer similar video quality. Under normal circumstances Skype video calls are peer-to-peer, i.e. there is no central instance on which the voice and video packets can be intercepted. That's a good thing, and Skype has many ways to find out if a direct link between two Skype clients can be established.

And here's a really interesting scenario: Skype is even able to figure out that a direct link can be established between the Skype client on my notebook, which is connected to my home network over the VPN tunnel I usually establish with my VPN server at home when I'm traveling, and a Skype client on a PC at home, despite a NAT between the VPN link and the local home network. That means that when I'm traveling, Skype packets are routed directly between those two clients through the tunnel. At no time do such Skype packets traverse a link on the Internet outside the VPN tunnel. In other words, potential attackers that can passively collect packets between where I am and my home network are unable to decrypt my Skype traffic, should they have such an ability.

Sure, Skype and anyone who has access to Skype can still find out if and when I'm online, probably even where I'm online and when and to whom I make calls. The call content, however, can't be intercepted without me noticing, i.e. when the traffic suddenly is not peer-to-peer through the VPN tunnel anymore. Far from perfect, but something to work with for the moment.

Opera Turbo Turned Off After 30 Seconds

Opera and its server-side compression have helped me a lot over the years to overcome issues like slow connections or strange operator proxies blocking access to websites, such as the strange case I came across back in 2008. Fortunately, networks have become faster and other strange effects caused by meddling with data have also receded, so I usually use the full Opera browser these days instead of Opera Mini or the Opera Turbo functionality. But every now and then I end up in a GSM-only place and so far the server-side compression has always helped. Well, up until now.

When I recently wanted to use Opera Turbo again to browse my favorite websites in a bandwidth-starved area, it took a long time because all the advertisements I can block so conveniently locally with a modified hosts file had to be loaded again. Not only did the pages take long to load due to the advertisements, but splash screens and other intrusive advertising are just not my cup of tea. So after about 30 seconds I switched Opera Turbo off again and resorted to a non-proxied connection, which was no slower for my favorite pages than using server-side compression, as all advertisements were stripped out. And not only was it not slower, I also didn't have to put up with splash screen advertisements. So for me the days of using server-side compression to speed up my web experience in bandwidth-limited areas are definitely over…

Another LTE First For Me: Intercontinental Roaming

I've had quite a couple of LTE and roaming firsts this year and, as I've laid out in this post, 2014 is the year when affordable global Internet roaming finally became a reality. Apart from having used a couple of LTE networks in Europe over the last couple of months, I can now also report my first intercontinental LTE experience. When I recently traveled with my German SIM card to the United States, I was greeted by an LTE logo on both the T-Mobile US and AT&T networks. Data connectivity was quick (but I didn't run speed tests so I can't give a number) and with the 20 bands supported by my mobile device I could actually detect quite a number of LTE networks at the place in Southern California where I stayed for a week:

  • Verizon was active in band 13 (700 MHz)
  • Metro-PCS in band 4 (1700/2100 MHz)
  • AT&T was available in band 4 (1700/2100 MHz) and band 17 (700 MHz)
  • Sprint had a carrier on air in band 25 (1900 MHz, FDD) and band 41 (2500 MHz, TDD)
  • T-Mobile US had a carrier on air in band 4 (1700/2100 MHz)

And in case you wonder how you can find LTE transmissions without special equipment, have a look here. It's not quite straightforward to map transmissions to network operators, but it's not impossible with a bit of help from Wikipedia (see here and here) and 3GPP's band plan that shows the uplink and downlink frequencies of the different bands.

Netflix, HTML5, Linux and What Else Made Me Sign-Up

There we go, I signed up to Netflix after being on the lookout for years for a video on demand service that would fit my needs! Here's the story:

A video on demand service has to run on Linux for me because that's my OS of choice for all my computers at home. This, together with a 4 year old media center PC, has disqualified all VoD services so far because all of them require either the Adobe Flash player plugin or, even worse, Microsoft's Silverlight. I tried Amazon's video service for a while but the Linux version of the Adobe Flash player sooner or later crashes during video playback. I also tried the Linux wrapper for Silverlight, which seems to work fine on newer PCs. On my 4 year old media center PC, however, I never got smooth video playback that way.

And then Netflix came around the corner with HTML5 video playback support. Unfortunately, but hardly surprisingly, it uses an HTML5 extension to play back DRM-protected media. Yes, I know that's evil from an open source point of view and Mozilla has so far refused to put it into their browser. Google, on the other hand, has decided to support this extension in their Chrome browser. I'm about as far from liking Chrome as I am from being a Microsoft or Adobe fanboy, but I can live with a Chrome installation on my Linux system for a specific purpose while continuing to use Firefox for everything but Netflix.

Up until last week a tweak was required to make Netflix work with Chrome on Linux, i.e. the user agent needed to be changed. I was tempted to install the plugin for that purpose but didn't get around to it before Netflix announced that they now support Chrome on Linux as well. Having heard that, I signed up immediately to give it a try and the video is as smooth on my somewhat older machine as I could ask for. Well done!

And the second issue I've had with most VoD services, in particular the ones offered by German companies, is that their support for the original English audio of the content is minimal at best. Not so with Netflix: everything I've watched so far has English audio.

So as you can imagine, I was busy over the weekend checking things out. Netflix says on its configuration settings page that full-HD video streaming requires a bandwidth of up to 6.5 Mbit/s. In practice I've observed that the content I watched was streamed at around 3.5 Mbit/s, or around 1.5 GB per hour, on the PC and at around 1.5 Mbit/s, or around 650 MB per hour, via the Netflix app on my smartphone. Let's see how long Netflix can keep me entertained and what kind of impact that will have on my monthly data consumption over my VDSL line at home. So far, my monthly usage has been around 35 GB, which already includes a fair amount of audio and video streaming.

And the closing thought for today: Netflix also seems to offer some content in 4k resolution. No, I don't have a screen for such high resolution content, but I'm mentioning this because of the staggering bandwidth required for that resolution. On the settings page, Netflix says that 4k video requires up to 7.5 GB per hour, i.e. the video streams at over 16 Mbit/s. Now double that for two screens in the household… And now assume two times 2 hours of consumption a day, which would result in a monthly data usage for Netflix alone of 900 GB. Yes, I know, that's not going to be tomorrow and not for everyone, but it shows where we are headed.
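For those who want to play with the numbers themselves, here's the simple arithmetic behind those figures as a little Python snippet:

# convert a streaming bit rate in Mbit/s into data volume in GB per hour
def gb_per_hour(mbit_per_s):
    return mbit_per_s * 3600 / 8 / 1000.0   # Mbit/s -> Mbit/h -> MB/h -> GB/h

print(gb_per_hour(16.7))    # roughly 7.5 GB per hour for the 4k stream
print(2 * 2 * 7.5 * 30)     # two screens, 2 hours a day, 30 days: 900 GB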