Fiber Connectivity in Paris – Some Images

A few weeks ago I reported on my stellar speed experience with fiber connectivity in Paris, with downlink and uplink speeds of 264 Mbit/s and 48 Mbit/s respectively. Today I have a follow-up with a couple of pictures and some technical background information.

Fiber and Copper in the Apartment

[Picture 1: Router and fiber ONT]

Let's start at the end of the fiber. The first picture shows two boxes stacked on top of each other. The bigger one below is a standard Wi-Fi access point with router functionality, which is connected to the small box via an Ethernet cable. The small box is a fiber to copper converter. The green cable going into the small box is the fiber cable. The small box gets pretty warm, so it's safe to assume it takes more than the 2.5 Watts of a Raspberry Pi…

[Picture 2: ONT close-up]

The second picture is a close-up of the fiber to copper converter, the Optical Network Terminal (ONT). The Ethernet cable on the left is connected to the bigger Wi-Fi router box shown in the first picture. The optical cable with the green connector on the right goes to the next box in the apartment, shown in picture 3. As no power is delivered to that box, it must be a passive component that connects the sturdier optical cable coming into the apartment to the more flexible optical cable with the green connectors.

[Picture 3: Fiber to fiber]

And that's it as far as the equipment in the apartment is concerned. The fourth picture shows how the optical cable gets into the apartment via a crudely drilled hole that was filled with some glue afterwards. Not quite a work of art, to say the least.

Yes, it's GPON!

[Picture 4: Cable entry into the apartment]

So what kind of fiber technology is used for this line? The model number on the fiber to copper converter in picture 2 (I-010G-Q) provides the first clue: a Google search for it turns up a number of interesting links to follow. The most interesting one is lafibre.info, which contains lots of pictures of how the outdoor part of fiber networks is installed in France. The search for the model number also led me to a pretty interesting document from Alcatel-Lucent which details their Gigabit Passive Optical Network (GPON) components and network setups on 250+ pages. So there we go, the I-010G-Q is part of a GPON installation: 2.4 Gbit/s in the downlink and 1.2 Gbit/s in the uplink direction, shared between all subscribers behind one fiber strand, which is split into separate strands close to the apartment building, one for each customer.
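
To put the shared capacity into perspective, here's a quick back-of-the-envelope sketch in Python. The split ratios are an assumption of mine (1:32 and 1:64 are commonly quoted values), not figures taken from the Alcatel-Lucent document:

# Rough per-subscriber GPON capacity if all subscribers transmitted at once.
# The split ratios are an assumption (commonly quoted values), not figures
# from the Alcatel-Lucent document mentioned above.

GPON_DOWNLINK_MBITS = 2400   # 2.4 Gbit/s shared downlink
GPON_UPLINK_MBITS = 1200     # 1.2 Gbit/s shared uplink

for split_ratio in (32, 64):
    down = GPON_DOWNLINK_MBITS / float(split_ratio)
    up = GPON_UPLINK_MBITS / float(split_ratio)
    print("1:%d split: %.0f Mbit/s down, %.0f Mbit/s up per subscriber (worst case)"
          % (split_ratio, down, up))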

From an evolution point of view the document's 2010 creation date is also interesting. In other words, GPON is well into its 4th year of deployment now and has come nowhere near capacity issues so far. And that's unlikely to happen anytime soon, i.e. there's no immediate need to beef up the specs to make it even faster. The challenge with GPON is rather that optical cables need to go into buildings and from there to the apartments to deliver speeds in the Gbit/s range. And that certainly comes at a price.

The Next Step In LTE Carrier Aggregation: 3 Bands

The hot LTE topic of 2014 that made it into live networks is certainly Carrier Aggregation (CA). Agreed, there aren't too many devices that support CA at the end of 2014, but that's going to change soon. In the US, quite a number of carriers have deployed 10 + 10 MHz Carrier Aggregation to play catch-up with the 20 MHz carriers already used in Europe. In Europe, network operators will use 10 MHz + 20 MHz aggregations and some even 20 + 20 MHz for a stunning theoretical peak data rate of 300 Mbit/s. So where do we go from here? Obviously, aggregating 3 bands is the next logical step.

And it seems 3GPP is quite prepared for it. Have a look at this page, which has an impressive list of all sorts of LTE carrier aggregation combinations and also shows in which 3GPP specification version each of them was introduced.

For Europe, the 3A_7A_20A combination (20 + 20 + 10 MHz) is especially interesting, as there are network operators that hold spectrum in each of these bands. The peak data rate with 50 MHz of downlink spectrum, which some network operators actually own, would be 375 Mbit/s.
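
As a quick plausibility check of that number, here's a small Python sketch; the roughly 150 Mbit/s per 20 MHz carrier (Category 6 class modem, 2x2 MIMO, 64QAM) it assumes is my own simplification, not a value from the 3GPP tables:

# Rough peak LTE downlink rates for different aggregated bandwidths, assuming
# about 150 Mbit/s per 20 MHz carrier (2x2 MIMO, 64QAM), i.e. 7.5 Mbit/s
# per MHz. This is a simplification, not a 3GPP figure.

MBIT_PER_MHZ = 150.0 / 20.0

combinations = {
    "10 + 10 MHz": (10, 10),
    "20 + 20 MHz": (20, 20),
    "3A_7A_20A (20 + 20 + 10 MHz)": (20, 20, 10),
}

for name, carriers in combinations.items():
    total_mhz = sum(carriers)
    print("%s: %d MHz -> %.0f Mbit/s peak" % (name, total_mhz, total_mhz * MBIT_PER_MHZ))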

For North America, there are literally dozens of potential combinations listed. I'm not sure which ones might actually be used, but I suspect it will be difficult to come up with 50 MHz of total aggregated bandwidth in this region, so Europe will continue to have an edge when it comes to speed.

How To Fix Ubuntu Wi-Fi Tethering Issues With Some Smartphones

[Screenshot: tethering issues]

I use smartphone Wi-Fi tethering every day to connect my notebook to the Internet. This mostly works out of the box. There are, however, a small number of smartphones with which I have problems: while the notebook connects just fine, ping times are very long and erratic, as shown in the screenshot on the left, and there's almost no data throughput. It took me a long time to figure out what the issue was, but at some point I realized that I only had the problems with a few particular devices when my notebook was not connected to the charger. Ah, many of you might say now, then it has something to do with power saving modes!

And indeed it has. By default, Ubuntu activates power save mode in the Wi-Fi chip when running on battery and deactivates it as soon as the notebook is connected to the mains again. While power save mode slightly increases ping times, it otherwise has no negative effects with 99% of the smartphones I try, except for the few it wreaks total havoc on.

Fortunately, there's a simple way to disable power save mode. A simple "sudo iwconfig wlan0 power off" from a shell instantly fixes the problem. The "iwconfig" command without any parameters then shows that power save mode was switched off despite running on battery:

wlan2     IEEE 802.11bgn  ESSID:"martins-i-spot"  
          Mode:Managed  Frequency:2.462 GHz  Access Point: xx:xx  
          Bit Rate=57.8 Mb/s   Tx-Power=16 dBm   
          Retry  long limit:7   RTS thr:off   Fragment thr:off
          Power Management:off
          Link Quality=70/70  Signal level=-38 dBm  
          Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
          Tx excessive retries:0  Invalid misc:90   Missed beacon:0

While this is a good short term fix, Wi-Fi power management is activated again after rebooting or after sleep mode. To permanently disable Wi-Fi power save mode, a script that contains the command can be added in the power management configuration directory:

cd /etc/pm/power.d/
sudo touch wireless
sudo chmod 755 wireless
sudo nano wireless

And then paste the following two lines inside:

#!/bin/bash
/sbin/iwconfig wlan0 power off

That's it. Just one more thing perhaps: Use "ifconfig" to check whether your Wi-Fi adapter is really "wlan0" or whether the OS has assigned another name to it, and adapt the command accordingly.
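
If you'd rather not hard-code the interface name, here's a Python variant of the same idea that could go into /etc/pm/power.d/ instead of the two-line shell script above. It's only a sketch I put together for illustration: it assumes iwconfig is installed and simply loops over all interfaces that have a "wireless" entry in /sys/class/net:

#!/usr/bin/env python
# Disable Wi-Fi power management on all wireless interfaces found in
# /sys/class/net. A sketch only: it assumes iwconfig is installed and that
# the script runs as root, which is the case for scripts in /etc/pm/power.d/.
import os
import subprocess

for iface in os.listdir("/sys/class/net"):
    # only interfaces with a "wireless" subdirectory are Wi-Fi adapters
    if os.path.isdir("/sys/class/net/%s/wireless" % iface):
        subprocess.call(["/sbin/iwconfig", iface, "power", "off"])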


Perhaps It's Time for 3G to LTE Handovers Now?

While most networks still use the Radio Bearer "Release with Redirect" method to switch from LTE to 3G when necessary, some networks have started using a real LTE to 3G packet handover procedure that significantly reduces the outage time of the data bearer. So far so good. The problem with this is that once a device is on the 3G layer, there's no way for it today to get back to LTE until no more data is transmitted and the connection is put into Idle or Cell/URA-PCH state. This is especially problematic if a mobile device is used for tethering in combination with notebooks and other devices that send data all the time, as the switch back to LTE then never happens. Perhaps the time has come to change this?

Before I go on explaining why the time might have come for this to change, it's perhaps a good idea to have a quick look at the problem of a 3G to LTE handover. While active in UMTS, the mobile's transceiver is busy all the time, so it can't look at other channels and bands for a better radio technology. The only way to do this is for the network to schedule transmission gaps (the famous UMTS compressed mode) and to instruct the mobile device to look for LTE cells during those transmission and reception gaps. Obviously such a radio reconfiguration has a significant drawback: the data rate goes down. This is perhaps acceptable if an LTE signal is found, but not very desirable if there is no LTE coverage to be found for some time. This is the reason why network operators have so far shied away from it. After all, 3G is quite a good technology for Internet access as well.

These days, however, LTE coverage has become a lot better, and when I look at network coverage maps there aren't many places left in many networks where 3G is deployed but LTE is not. In other words, if the unfortunate event occurs and the mobile is sent to 3G due to a lack of LTE coverage, chances are very high that the user will be back in LTE coverage quite quickly. Therefore I think that with the LTE network coverage there is today, it would make sense to think about 3G to LTE handovers.

P.S.: And it's not as if changing from a slower RAT to a faster RAT while transferring data is unheard of. It works great from GSM to UMTS, for example. As GSM/GPRS uses timeslots, a mobile device has ample time, even without network support, to search for UMTS while data is transferred. The same mechanism also works to switch from GPRS to LTE during a data transfer, but so far only few mobile devices have implemented this. Fortunately, the first devices are now showing up that can do GPRS to LTE reselections during packet data transfer. So when I'm connected while on a train, I at least end up on LTE again if things get so bad for a while that my connectivity ends up on the GSM layer.

First Carrier in Germany Starts LTE-Advanced Carrier Aggregation with 300 Mbit/s

In a number of European countries and elsewhere on the planet, network operators have rolled out LTE-Advanced Carrier Aggregation in recent months. Most of them bundle a combination of 10, 15 or 20 MHz carriers. In Germany, the first mobile network operator has now also started Carrier Aggregation and has gone straight to the maximum that is possible today: two full 20 MHz carriers for a theoretical top speed of 300 Mbit/s with LTE Category 6 devices.

Nicely enough, the carrier has also enhanced its publicly available network coverage map to show where 2×20 MHz CA is available (click on the LTE 300 Mbit/s checkbox). At the nationwide zoom level there's not much to be seen, but when zooming into the map over big cities such as Cologne, Düsseldorf, Berlin and many others, you can see that these are already quite well covered. I'm looking forward to the first reports by the tech press on how much can be achieved in practice.

Power Cycling My Backup Router With My Raspi

I am quite unhappy to admit it, but when it comes to reliability, the LTE router that I use for backup connectivity for my home cloud comes nowhere close to my VDSL router. Every week or so, after the daily power reset, the router fails to connect to the network for no apparent reason. Sometimes it connects but the user plane is broken: packets are still going out, but my SSH tunnels do not come up, while the authentication log on the other side shows strange error messages. The only way to get things back on track is to reboot the LTE router or to power cycle it. Rebooting the router can only be done from inside the network, so when I'm traveling and the network needs to fall back to the backup link, there's nothing I can do should that fail.

When I recently stumbled over the 'EnerGenie EG-PM2' power strip, which has power sockets that can be switched via a built-in USB interface, I knew the time had come to do something about this. At around 30 euros it's quite affordable, and the software required on the Raspberry Pi, Ubuntu or Debian side is open source and already part of the software repositories. A simple 'sudo apt-get install sispmctl' executed in a shell and the setup is up and running without further configuration. Individual power sockets are switched off and on via the following shell commands:

sudo sispmctl -f 3   # switches power socket 3 off

sudo sispmctl -o 3   # switches power socket 3 on

It couldn't be easier, and I had the basic setup up and running in two minutes. As a next step I wrote a short Python script that checks whether Internet connectivity is available via the backup link and, if not, power cycles the LTE router. I noticed that there's a Python wrapper for 'sispmctl', but it's also possible to just execute the command in a shell from Python as follows:

import subprocess
result_on  = subprocess.call ("sudo sispmctl -o 4", shell=True)

Perhaps not as elegant as using the wrapper but it works and the result variable can be checked for problems such as the USB link to the power strip being broken.
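
For completeness, here's roughly what such a watchdog script could look like, reduced to the essentials. The ping target, the socket number and the wait time are placeholders to be adapted, and the simple ping check is just one way of testing the backup link:

#!/usr/bin/env python
# Minimal watchdog sketch: if no host can be pinged via the backup link,
# power cycle the LTE router connected to a socket of the power strip.
# The ping target, socket number and wait time are placeholders; to force
# the ping over the backup link, "ping -I <interface>" could be used instead.
import subprocess
import time

PING_TARGET = "8.8.8.8"   # placeholder: any host reachable via the backup link
ROUTER_SOCKET = 4         # placeholder: socket the LTE router is plugged into


def backup_link_up():
    # one ping with a 5 second timeout, return code 0 means success
    return subprocess.call("ping -c 1 -W 5 " + PING_TARGET, shell=True) == 0


if not backup_link_up():
    subprocess.call("sudo sispmctl -f %d" % ROUTER_SOCKET, shell=True)
    time.sleep(10)        # leave the router without power for a moment
    subprocess.call("sudo sispmctl -o %d" % ROUTER_SOCKET, shell=True)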

LTE Carrier Aggregation: Intra-Band Non-Contiguous

Apart from the LTE Carrier Aggregation used in practice today, which combines channels in different frequency bands for higher throughput, there are also CA combinations that combine channels in the same frequency band that are not adjacent to each other. Such combinations are called Intra-Band Non-Contiguous. Quite a mouthful. Now what would they be good for?

I don't have any practical examples, but I think such combinations would make sense for network operators that have either received several chunks of spectrum in the same band over time or have acquired additional spectrum, e.g. through a merger with another network operator.

When looking at this carrier aggregation table, such combinations are foreseen for the US, Europe and China. In the US, a non-contiguous combination is foreseen in band 4 (AWS, 1700/2100 MHz), which quite a lot of carriers seem to use. In Europe, band 3 (1800 MHz) and band 7 (2600 MHz) have such combinations defined as well. I wonder which carriers might want to use them in the near future. Any ideas?

G.Fast – A Technology To Prevent A Fiber Monopoly?

Fiber connectivity is moving closer and closer to people's homes. Some, like me in Paris, are fortunate enough to get a fiber line right into the apartment and enjoy speeds of well beyond 250 Mbit/s in the downlink and 50 Mbit/s in the uplink. That's something the good old telephone line can't match today by a wide margin. Even cable modems that use the TV cable can't reach those speeds at the moment, particularly in the uplink direction, which is a must for hosting services at home. In a previous post I therefore speculated that the network operator that is first willing to deploy real fiber to people's homes is likely to become the next monopoly operator in an area. That's not good news for consumers in the long run. Is there any hope the good old copper line might catch up?

At the moment, VDSL2 Vectoring is the best there is for phone lines. With that technology, speeds of 100 Mbit/s in the downlink and 40 Mbit/s in the uplink are possible. Easy to beat for fiber. G.fast promises to be the next step and offers theoretical top speeds of 500 Mbit/s to 1 Gbit/s; have a look at this Wikipedia entry for further details. The problem, however, is that such high speeds are only possible for cable lengths shorter than 100 m. A lot of the outdoor DSLAM locations used for VDSL2 and VDSL2 Vectoring today are not that close to subscribers' homes, which means earthworks are still necessary to replace the copper cable between today's VDSL outdoor cabinets and the buildings with a fiber strand. But at least it removes the requirement to deploy fiber inside buildings.

When copper cables get longer, speeds drop quickly. At a copper cable length of 200 meters, top speeds already drop to 200 Mbit/s; at 250 meters you are down to 150 Mbit/s. Again, fiber already tops those numbers easily today.
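
Just to put those numbers side by side, here's a tiny sketch; the values are the rough figures quoted above, not measurements of mine:

# Approximate G.fast downlink speed over copper loop length, using the
# rough figures quoted above (not measured values).
gfast_speed = {
    "below 100 m": "500 - 1000 Mbit/s",
    "around 200 m": "200 Mbit/s",
    "around 250 m": "150 Mbit/s",
}

for length, speed in sorted(gfast_speed.items()):
    print("%s of copper: ~%s" % (length, speed))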

So as fast as G.fast sounds, to get the promised speeds the fiber needs to go right up to the building, and that requires the unloved earthworks. And that might bring us right back to the fiber monopoly. So I remain skeptical.

My Exodus from Truecrypt to DM-Crypt Is Complete

Back in August I wrote that I had started my exodus from Truecrypt as the software is no longer supported by its authors. Over the months I've experimented a lot with dm-crypt on Linux to see whether it is a workable alternative for me. As it turns out, dm-crypt works great, and here's how my migration went. It's a bit of a long story, but since I did a couple of other things along the way that are typical maintenance tasks when running out of disk space, I thought it's a story worth telling to pass on the tips and tricks I picked up from different sources.

Migrating My Backup Drives To DM-Crypt

At first I migrated my backup hard drives from Truecrypt to dm-crypt while staying with Truecrypt on my PC. Instead of using a dm-crypt container file, I chose to create a dm-crypt encrypted partition on my backup drives with Ubuntu's “Disk Utility”. Ubuntu automatically recognizes the dm-crypt partition when I connect the backup hard drives to the PC and asks for the password. Pretty much foolproof.

Running Out Of Disk Space Faster Than I Thought

The next step came when my 500 GB SSD was close to becoming full and I had to get a bigger SSD. Fortunately, prices have once again come down quite a bit over the last year, and a 1 TB Samsung 840 EVO was to be had for a little over 300 euros. I had some time to experiment with different migration options, as the 840 EVO had a firmware bug that decreased file read speeds over time, so I chose to postpone my migration until Samsung had a fix.

DM-Crypt Partitions Can Be Mounted During the Boot Process

A major positive surprise during those trial runs was that even my somewhat older Ubuntu 12.04 LTS recognizes the dm-crypt partition during the boot process when it is configured in the “fstab” and “crypttab” configuration files, and asks for the password before the user login screen is shown. Perfect!

Here's what my “/etc/crypttab” entry looks like:

# create a /dev/mapper device for the encrypted drive
data   /dev/sda3     none luks,discard

And here's what my “/etc/fstab” entry looks like:

# /media/data LUKS
/dev/mapper/data /media/data ext4 discard,rw 0 0

Sins Of The Past – Hard Disk Migration The Hard Way

When I initially upgraded from a 350 GB hard drive to a 500 GB SSD, I used Clonezilla to make a 1:1 copy of my hard drive to the SSD and used the extra space for a separate partition. After all, I couldn't imagine that I would run out of disk space on the initial 350 GB partition anytime soon. That turned out to be a bad mistake pretty quickly, as the virtual machine images on that partition soon grew beyond 200 GB. As a consequence I moved my Truecrypt container file to the spare partition, but that only delayed the inevitable by a couple of months. In the end I was stuck with about 50 GB left on the primary partition and 100 GB on the spare partition, with the virtual machine images threatening to eat up the remaining space within months.

As a consequence, I decided that once I moved to a 1 TB SSD I would change my partitions and migrate to a classic separation of a small system partition for the OS and a large user data partition. I left the system partition unencrypted, as the temp directory is in memory, the swap partition is a separately encrypted partition anyway and the default user directories are encrypted at the file system level. In other words, I decided to only encrypt the second partition with dm-crypt, in which I would store the bulk of my user data and to which I would link from my home directory.

Advantages of a Non-Encrypted System Partition

There are a couple of advantages to a non-encrypted system partition. The first one is that in case something goes wrong and the notebook refuses to boot, classic tools can be used to repair the installation. The second advantage is that Clonezilla can back up the system partition very quickly, because it can see the file system and hence only needs to read and compress the sectors of the partition that are actually filled with data. In practice, my system partition contains around 20 GB of data, which Clonezilla can copy in a couple of minutes even on my relatively slow Intel i3 based notebook. If I used dm-crypt for the system partition, Clonezilla would have to back up each and every sector of the 120 GB partition.

Minimum Downtime Considerations

The next exodus challenge was how to migrate to the 1 TB SSD with minimum downtime. As this is quite a time-intensive process during which I can't use the notebook, I played with several options. The first one I tried was to use Clonezilla to copy only the 350 GB primary partition to the new SSD and then shrink it down to around 120 GB. This works quite well, but it requires shrinking the partition before recreating the swap partition and then manually reinstalling the boot sector. Reinstalling the boot sector is a bit tricky if done manually, but the Boot-Repair-Disk project pretty much automates the process. The advantage of copying only one partition is obviously that it speeds things up quite a bit. In the end I chose another option when the time came: using Clonezilla to make a 1:1 copy of my 500 GB SSD, including all partitions, to the 1 TB SSD. This saved me the hassle of recreating the boot sector, and I had the time for it anyway as I ran the job overnight.

Tweaking, Recreating and Encrypting Partitions On The New SSD

Once that was done I had a fully functional image on the 1 TB SSD with a working boot sector, and to continue the work I put it into another notebook. This way I could finish the migration while still being able to work on my main notebook. At this point I deleted all data on the spare partition of the 1 TB SSD and also the virtual machine images on the primary partition, which left about 20 GB on the system partition. I then booted a live Ubuntu system from a CD and used “gparted” to shrink the system partition from 350 GB down to 120 GB and to recreate a Linux swap partition right after the new, smaller system partition. Like the 1:1 Clonezilla copy process earlier, this takes quite a while. That was not a problem, however, as I could still work on the 'old' SSD and even change data there, since migrating the data would only come later. Once the new drive was repartitioned, I rebooted into the system on my spare notebook and used Ubuntu's “Disk Utility” to create the dm-crypt user partition in the 880 GB of remaining space on the SSD.

Auto-Mounting The Encrypted Partition and Filling It With Data

As described above, it's possible to auto-mount the encrypted partition during the boot process so the partition is available before user login. In my previous installation I had mapped the “Documents” folder and a couple of other directories to the Truecrypt volume, so I removed those logical links and created new ones pointing to empty directories on the new dm-crypt volume. Once that was done, it was time to migrate all my data, including the virtual machine images, to the new SSD. I did this by backing up all my data to one of my cold-storage backup disks as usual and restoring it from there to the new SSD. The backup only takes a couple of minutes, as LuckyBackup is pretty efficient and only copies new and altered files. To keep the downtime to a minimum, I swapped the SSDs after I made the copy to the backup drives and started working with the 1 TB SSD in my production notebook. Obviously I restored the email directory and the most important virtual machine images first so I could continue working with those while the rest of the data was copied over in the background.

Thunderbird Is A Special Bird

In my Truecrypt installation I used a logical link for the mail directory so I could have it on the Truecrypt volume while the rest of the Thunderbird installation remained in the user directory. At first I thought it would only be necessary to replace the logical link to the mail folder, but it turned out that Thunderbird also keeps the full path in its settings and doesn't care much about logical links. Fortunately, the full paths can be changed in "Preferences – Mail Setup".

Summary

There we go, this is the story of my migration away from Truecrypt, upgrading to a bigger SSD and cleaning up my installation at the same time. I'm glad I could try everything on a separate notebook first, without Ubuntu complaining or making things difficult when it detected different hardware, as other operating systems perhaps would have. Quite a number of steps ended up in trial-and-error sessions that would have caused a lot of stress if I had run into those issues only during the real migration. It's been a lot of work, but it was worth it!

A 2 Amp USB Charger Is Great – If A Device Makes Use Of It

The smallest 2 ampere USB charger I've come across so far is from Samsung, and my Galaxy S4 makes almost full use of its capabilities by drawing 1.6 amperes when the battery is almost empty. In case you are wondering how I know, have a look at the measurement tool I used for measuring the power consumption of a Raspberry Pi. What I was quite surprised about, however, was that all other devices I tried it with, including a new iPhone 6, only charge at 1 ampere at most. I wondered why that is, so I dug a bit deeper. Here's a summary of what I've found:

One reason for not drawing more than 1A out of the charger is that some devices simply aren't capable of charging at higher rates, no matter which charger is used. The other reason is that plain USB data ports are only specified up to 900 mA; higher charging currents require the separate USB charging specification or proprietary signaling. Here's how it works (a small sketch of the resulting decision logic follows the list below):

  • When a device is first connected to USB it may only draw 100 mA until it knows what kind of power source is behind the cable.
  • If it's a PC or a hub, the device can request more power and, if granted, may draw up to 450 mA out of a USB 2 connector. And that's as much as my S4 will draw out of the USB connector of my PC.
  • USB3 connectors can supply up to 900 mA with the same mechanism.
  • Beyond the 450 mA USB2 / 900 mA USB3, the USB Charging Specification v1.1 that was published in 2007 defines two types of charging ports. The first is called Charging Downstream Port (CDP). When a device recognizes such a USB2 port it can draw up to 900 mA of power while still transferring data.
  • The second type of USB charging port defined by v1.1 of the spec is the Dedicated Charging Port (DCP). No data transfers are possible on such a port, but it can deliver a current between 500 mA and 1.5 A. On such a port the D+ and D- data lines are shorted together through a resistance of no more than 200 Ohms, so the device can tell that it's not connected to a USB data port. Further, a device recognizes how much current it can draw out of such a port by monitoring the voltage drop when the current consumption is increased.
  • With v1.2 of the charging specification, published in September 2010, a Dedicated Charging Port may supply up to 5A of current.

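To summarize the standardized part of these rules, here's a small sketch of the decision logic a device might follow. The port types and current limits are the ones from the list above, while the function itself and its exact return values are a simplification of my own:

# Simplified decision logic for how much current a device may draw, based
# on the standardized port types described above. The function and its
# return values are a simplification, not text from the specification.

def max_current_ma(port_type, usb3=False, bc_version="1.2"):
    if port_type == "SDP":    # Standard Downstream Port (PC or hub)
        # 100 mA before enumeration, more only after the host grants it
        return 900 if usb3 else 450
    if port_type == "CDP":    # Charging Downstream Port: charging + data
        return 900
    if port_type == "DCP":    # Dedicated Charging Port: D+/D- shorted
        return 5000 if bc_version == "1.2" else 1500
    return 100                # unknown port type: stay at 100 mA

print(max_current_ma("SDP"))  # 450 - what my S4 draws from the PC
print(max_current_ma("DCP"))  # 5000 with a v1.2 dedicated charging port
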
And that's as far as the standardized solutions go. In addition there are also some Apple and Samsung proprietary solutions to indicate the maximum current their chargers can supply:

  • Apple 2.1 Ampere
  • Apple 2.4 Ampere
  • Samsung 2.4 Ampere

There we go, quite a complicated state of affairs. No wonder only one of my devices makes use of the potential of my 2A travel charger. For more information, have a look at the USB article on Wikipedia, which also contains links to the specifications, and at the external blog posts here, here and here.