50+ Gbit/s for Vodafone Germany on New Year’s Eve

Like every year, Vodafone has released numbers on mobile network usage during New Year's Eve between 8 pm and 3 am, as this is one of the busiest times of the year. This year, Vodafone says that 185 TB were used during those 7 hours. Let's say downlink and uplink traffic are split roughly 9:1, which would result in a total of 166.5 TB downloaded during that time. Dividing by 7 hours of 60 minutes and 60 seconds each and then multiplying by 8 to get bits instead of bytes results in an average downlink speed at the backhaul link to the wider Internet of 53 Gbit/s. An impressive number, so a single 40 Gbit/s fiber link won't do anymore (if they only had a single site and a single backhaul interconnection provider, which is unlikely). Back in 2011/2012 the same number was 'only' 7.9 Gbit/s.
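For those who want to verify the arithmetic, here's the calculation as a quick Python snippet:

# Back-of-the-envelope check of the 53 Gbit/s figure above
total_bytes = 185e12                     # 185 TB used between 8 pm and 3 am
downlink_bytes = total_bytes * 9 / 10    # assumed 9:1 downlink/uplink split
seconds = 7 * 60 * 60                    # 7 hours
gbit_per_s = downlink_bytes * 8 / seconds / 1e9
print(f"average downlink: {gbit_per_s:.1f} Gbit/s")   # prints 52.9 Gbit/s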

On the other hand, when you compare the 53 Gbit/s for all Vodafone Germany customers to the 30 Gbit/s reached by the uplink traffic during the recent 32C3 congress, or the sustained 3 Gbit/s downlink data rate to the congress Wi-Fi generated by 8,000 mobile devices, the number suddenly doesn't look that impressive anymore. Or compare it to the 5,000 Gbit/s interconnect peaks at the German Internet Exchange (DE-CIX). Yes, it's a matter of perspective!

If you've come across similar numbers for other network operators please let me know, it would be interesting to compare!

Update: Secure Hotel Wi-Fi Sharing Over A Single VPN Tunnel For All Your Devices With A Raspberry Pi

Back in 2014, I came up with a project to use a Raspberry Pi as a Wi-Fi access point in hotels and other places when I travel to connect all my devices to a single Internet connection, which can either be over Wi-Fi or over an Ethernet cable. As an added (and optional) bonus, the Raspberry Pi also acts as a VPN client and tunnels the data of all my devices to a VPN server gateway and only from there out into the wild. At the time I put my scripts and operational details on Github for easy access and made a few improvements over time. Recently I made a couple of additional improvements which became necessary as Raspbian upgraded its underlying base from Debian Wheezy to Debian Jessie.

One major change this has brought along is that IPv6 is now active by default. For this project, IPv6 needs to be turned off, as most VPN services only tunnel IPv4 but happily return IPv6 addresses in DNS responses, which makes traffic go around the VPN tunnel if the local network offers IPv6. For details see here and here. Another change is that the OpenVPN client, if installed, is now started reliably during the boot process by default, which was not the case before. As a consequence I could put a couple of 'iptables' commands in the startup script to enable NATing to the OpenVPN tunnel interface straight away.
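For reference, turning off IPv6 system-wide can be done with two kernel parameters, e.g. in /etc/sysctl.conf (a minimal sketch; where exactly this ends up in the scripts may differ):

# Disable IPv6 on all current and future interfaces
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1

And the NAT rule mentioned above looks roughly like 'iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE', assuming the OpenVPN tunnel interface is called tun0.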

In other words, things are better than ever and v1.41 on Github now reflects those changes. Enjoy!

Moving to Linux Full Time – Dan Gillmor Writes About His Experiences

Microsoft Windows 10 behaves like a spy under your fingertips these days, Apple gives you less and less freedom on its desktop OS, so there's never been a better time to regain privacy and freedom by switching to Linux on the desktop. Over the years I wrote many articles on this blog about my Linux delights but I haven't seen a better summary of why switching to Linux on the desktop full time is so great than Dan Gillmor's recent article on the topic. Highly recommended!

My First IPv6 “First Call” To My Own Services At Home

A date to remember for me: On 15th January 2016 I contacted my web server at home running Owncloud and Selfoss for the first time over IPv6. From an end user's point of view no difference is visible at all but from a technical point of view it's a great "first time" for me, made even sweeter by the fact that my PC was not connected to another IPv6-enabled fixed line but connected via tethering to a smartphone with dual-stack IPv4v6 cellular connectivity.

The Thing With Dynamic IPv6 Addresses for Servers

And it's been a bit of a struggle to put together, as this IPv6 stuff is not as straightforward as I hoped it would be. For a crash course I wrote back in 2009, have a look here, here, here and here. The major challenge I had to overcome was to find a dynamic DNS service that can handle not only dynamic IPv4 addresses but also dynamic IPv6 addresses. Noip.com, where I host my domain and where I use the dynamic DNS service, can handle IPv6 addresses for my domain, but only as static entries. The response to a support question about how to do dynamic IPv6 addresses with them was the not very informative answer that they are working on it, but no date has been announced by when this will be available. Hm, looking at their record, they seem to have been working on IPv6 since 2011, so I won't get my hopes up that this will happen soon. Is it really that difficult? Shame on you!

O.k., another dynamic DNS service I use is afraid.org and they do offer dynamic DNS with IPv6. Unfortunately, they have a DNS entry time to live (TTL) for IPv6 of 3600 seconds, i.e. 1 hour. This is much too long for my purposes as my IPv6 prefix changes once a day and any change must be propagated as quickly as possible, not only after an hour in the worst case. They offer a lower TTL with a paid account, but their idea and my idea of how much this may cost are too far apart. I've found a couple of other dynamic IPv6 services but they were not suitable either, as their TTLs were also too long for my purpose.

One option I found that didn't have this restriction is dynv6.com. Their service is free and they do offer IPv4 and IPv6 dynamic DNS with a TTL of 1 minute but only for their own domain. Not an option for me either, I want to be reachable via my own domain. Kind of a deadlock situation…

But here's how I finally got it to work: The Domain Name System has an aliasing mechanism, the "Canonical Name" record (CNAME). By using this mechanism, I can forward DNS queries for my domain that is hosted at noip.com (let's say it's called www.martin.com) to my subdomain at dynv6.com (let's say my domain there is called martin.dynv6.com). So instead of updating the DNS entry for www.martin.com when my IPv6 address changes once a day, I can now update martin.dynv6.com, which has a TTL of 1 minute, while the CNAME forwarding at noip.com from www.martin.com to martin.dynv6.com in the DNS system is static and remains unchanged.
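To see the mechanism in action: a resolver follows the static CNAME automatically and returns the AAAA record maintained at dynv6.com. Here's a little sketch with Python and the dnspython library, using the placeholder domain names from above:

import dns.resolver  # pip install dnspython

# Querying the AAAA record of the noip.com-hosted name: the resolver
# follows the static CNAME (www.martin.com -> martin.dynv6.com) and
# returns the frequently updated IPv6 address with its short TTL.
answer = dns.resolver.resolve("www.martin.com", "AAAA")
print("canonical name:", answer.canonical_name)
print("TTL:", answer.rrset.ttl)
for record in answer:
    print("IPv6 address:", record.address)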

As a result, the web page name in the browser remains "www.martin.com" but I can use my dynamic IPv6 record at dynv6.com, where customer-specific domains are not offered. Not ideal, but it will do until Noip.com gets their act together.

LTE-A Pro for Public Safety Services – Part 2 – Advantages over PMR in 2G

LTE for Public Safety Services, also referred to as Private Mobile Radio (PMR), is making progress in the standards, and in the first part of this series I've taken a general look. Since then I have thought a bit about which advantages an LTE-based PMR implementation might offer over current 2G Tetra and GSM PMR implementations and came up with the following list:

Voice and Data On The Same Network: A major feature 2G PMR networks are missing today is broadband data transfer capability. LTE can fix this issue easily, as even the bandwidth-intensive applications safety organizations have today can be served. Video backhauling is perhaps the most demanding broadband feature, but there are countless other applications for PMR users that will benefit from having an IP-based data channel, such as number plate checking and identity validation of persons, access to police databases, maps, confidential building layouts, etc.

Clear Split into Network and Services: To a certain extent, PMR functionality is independent of the underlying infrastructure. E.g. the group call and push to talk (PTT) functionality is handled by the IP Multimedia Subsystem (IMS), which is mostly independent of the radio and core transport network.

Separation of Services for Commercial Customers and PMR Users: One option to deploy a public safety network is to share resources with an already existing commercial LTE network and upgrade the software in the access and core network for public safety use. More about those upgrades in a future post. The specific point I want to make here is that the IP Multimedia Subsystem (IMS) infrastructure for commercial customers and their VoLTE voice service can be completely independent from the IMS infrastructure used for the Public Safety Services. This way, the two parts can evolve independently from each other, which is important as Public Safety networks typically evolve much more slowly and in fewer steps compared to commercial services, since there is no competitive pressure to evolve things quickly.

Apps vs. Deep Integration on Mobile Devices: On mobile devices, PMR functionality could be delivered as apps rather than being built into the operating system. This allows the operating system and the apps to be updated independently and even the PMR apps to be used on new devices.

Separation of Mobile Hardware and Software Manufacturers: By having over-the-top PMR apps it's possible to separate the hardware manufacturer from the provider of the PMR functionality, except for a few interfaces which are required, such as setting up QoS for a bearer (already used for VoLTE today, so that's already taken care of) or the use of eMBMS for a group call multicast downlink data flow. In contrast, current 2G group call implementations for GSM-R require deep integration into the radio chipset, as pressing the talk button requires DTAP messages to be exchanged between the mobile device and the Mobile Switching Center (MSC), which are sent in a control channel for which certain timeslots in the up- and downlink of a speech channel are reserved. Requesting the uplink in LTE PMR requires interaction with the PMR Application Server, but this happens over an IP channel which is completely independent from the radio stack; it's just a message contained in an IP packet.

Device to Device Communication Standardized: The LTE-A Pro specification contains mechanisms to extend the network beyond the existing infrastructure for direct D2D communication, even in groups. This was lacking in the 2G GSM-R PMR specification. There were attempts by at least one company to add such a "direct" mode to the GSM-R specifications at the time, but there were too many hurdles to overcome, including questions around which spectrum to use for such a direct mode. As a consequence these attempts did not lead to commercial products in the end.

PMR not left behind in 5G: LTE as we know it today is not likely to be replaced anytime soon by a new technology. This is a big difference to PMR in 2G (GSM-R) which was built on a technology that was already set to be superseded by UMTS. Due to the long timeframes involved, nobody seriously considered upgrading UMTS with the functionalities required for PMR as by the time UMTS was up and running, GSM-R was still struggling to be accepted by its users. Even though 5G is discussed today, it seems clear that LTE will remain a cornerstone for 5G as well in a cellular context.

PMR On The IP Layer and Not Part of The Radio Stack (for the most part): PMR services are based on the IP protocol with a few interfaces to the network for multicast and quality of services. While LTE might gradually be exchanged for something faster or new radio transmission technologies might be put alongside it in 5G that are also interesting for PMR, the PMR application layer can remain the same. This is again unlike in 2G (GSM-R) where the network and the applications such as group calls were a monolithic block and thus no evolution was possible as the air interface and even the core network did not evolve but were replaced by something entirely new.

Only Limited Radio Knowledge Required By Software Developers: No deep and specific radio layer knowledge is required anymore to implement PMR services such as group calling and push to talk on mobile devices. This allows software development to be done outside the realm of classic device manufacturer companies and the select few software developers that know how things work in the radio protocol stack.

Upgradeable Devices In The Field: Software upgrades of devices have become a lot easier. 2G GSM-R devices and perhaps also Tetra devices can't be upgraded over the air, which makes it very difficult to add new functionality or to fix security issues in these devices. Current devices, which would be the basis for LTE-A Pro PMR devices, can easily be upgraded over the air as they are much more powerful and because there is a broadband network that can be used for pushing the software updates.

Distribution of Encryption Keys for Group Calls: This could be done over an encrypted channel to the Group Call Server. I haven't dug into the specification details yet to find out if or how this is done, but it is certainly possible without too much additional work. That was not possible in GSM-R: group calls were (and still are) unencrypted. Sure, keys could be distributed over GPRS to individual participants, but the service for such a distribution was never specified.

Network Coverage In Remote Places: PMR users might want to have LTE in places that are not normally covered by network operators because it is not economical. If they pay for the extra coverage, this could even have a positive effect in case the network is shared for both consumer and PMR services. However, there are quite a number of problems with network sharing, so one has to be careful when proposing this. Another option, which has also been specified, is to extend network coverage by using relays, e.g. installed in cars.

I was quite amazed how long this list of pros has become. Unfortunately my list of issues existing in 2G PMR implementations today that a 4G PMR system still won’t be able to fix is equally long. More about this in part 3 of this series.

Owncloud Must Think “WordPress” When It Comes To Updating

There are two extremes in the popular cloud space when it comes to ease of updating: WordPress and Owncloud…

On one side is WordPress, which has about the simplest and most reliable update process there is: it is fully automatic for small upgrades, doesn't even have to be triggered by the administrator, and requires only a simple click when going from one major release to the next. It hasn't failed me once in the past five years. And then there is Owncloud, which is the exact opposite.

Over the past year it failed me during each and every update, with obscure error messages even for small security upgrades, broken installations and last resort actions such as deleting app directories and simply ignoring some warnings and moving ahead despite them. If you think it can't be that bad, here's my account of one such update session last year. In the meantime I've become so frustrated and cautious that I clone my live Owncloud system and first try the update on a copy I can throw away. Only once I've found out how to run the upgrade process (which unfortunately changes every now and then as well), which things break and how to fix them do I run an upgrade on my production system. But perhaps there is some hope in sight?

My last upgrade a couple of days ago worked flawlessly, apart from the fact that the update process has changed again and it's now mandatory to finalize the upgrade from a console. But at least it didn't fail. I was about to troll about the topic again, but this morning I saw a blog post over at the Owncloud blog in which they finally admit in public that their upgrade process leaves a lot to be desired and that they have started to implement a number of things to make it more robust and easier to understand. If you have trouble updating Owncloud as well, I recommend reading the post, it might make you feel a bit better and give you some hope for the next update.

And to the Owncloud developers I would recommend going a bit beyond what they have envisaged so far: blinking lights, more robustness and more information about what is going on during an update are nice things and will certainly improve the situation. In the end, however, I want an update process that is just like WordPress's: you wake up in the morning and have an email in your inbox from your WordPress installation that tells you that it has just updated itself, that all is well and that you don't have to do anything anymore! That's how it should be!

LTE-A Pro for Public Safety Services – Part 1

In October 2015, 3GPP decided to refer to LTE Release 13 and beyond as LTE-Advanced Pro to point out that the LTE specifications have been enhanced to address new markets with special requirements, such as Public Safety Services. This has been quite a long time in the making because a number of functionalities were required that go beyond the mere delivery of IP packets from point A to point B. A Nokia paper published at the end of 2014 gives a good introduction to the features required by Public Safety Services such as the police, fire departments and medical emergency services:

  • Group Communication and Push To Talk features (referred to as "Mission Critical Push To Talk" (MCPTT) in the specs, perhaps for the dramatic effect or perhaps to distinguish them from previous specifications on the topic).
  • Priority and Quality of Service.
  • Device to Device communication and relaying of communication when the network is not available.
  • Local communication when the backhaul link of an LTE base station is not working but the base station itself is still operational.

Group Communication and Mission Critical Push to Talk have been specified as IP Multimedia Subsystem (IMS) services, just like the Voice over LTE (VoLTE) service that is being introduced in commercial LTE networks these days. In case many group participants are present in the same cell, they can use the eMBMS (evolved Multimedia Broadcast Multicast Service) extension to send a voice stream in the downlink only once instead of separately to each individual device.

In a previous job I worked on the GSM group call and push to talk service and other safety-related features for railways for a number of years, so all of this sounds very familiar. In fact I haven't come across a single topic that wasn't already discussed at that time for GSM, and most of them were implemented and are being used by railway companies across Europe and Asia today. While the services are pretty similar, the GSM implementation is, as you can probably imagine, quite different from what has now been specified for LTE.

There is lots to discover in the LTE-A Pro specifications on these topics and I will go into more details both from a theoretical and practical point of view in a couple of follow up posts.

IPv6 At Home And Away Now

In the second half of last year, my mobile network operator of choice introduced IPv4/v6 dual-stack functionality and since then I've been enjoying IPv6 on my mobile device while away from home. Not that I would notice as a normal user, as all services I use can still be reached over IPv4, but as a tech-geek, you know… For me this was a bit ironic as I always assumed that I would have IPv6 on my DSL connection long before I would use it on my mobile devices in the cellular network. And I could have, to be honest, but I just didn't want to update my fixed line connection at home to "all-IP" as it's a critical link and I don't change critical infrastructure just like that if it's not really necessary. Anyway, back in December 2015 I had to switch my DSL line to "all-IP" because my network operator politely forced me to, and apart from a number of other sweet things the new package included native IPv6 connectivity if I wanted it.

As I was traveling a lot in December I decided to keep IPv6 off for the time being and start experimenting with it once back home for more than just a couple of hours. So this week I finally got around to switching IPv6 on and just let it run for a while without any other modifications to make sure my servers are not negatively impacted. So far, things have run smoothly except for one thing I was expecting. After switching on IPv6, my devices immediately found the public IPv6 prefix and assigned public IPv6 addresses to themselves. The servers did so as well, including my Raspberry Pis, a nice side effect of having upgraded them from Raspbian based on Debian Wheezy to Raspbian based on Debian Jessie last year. That will make it a bit easier to make them reachable not only via IPv4 but also via IPv6 from the Internet later. The one thing I was actually expecting to break is that for some services I use VPN connections to overcome geo-blocking. As my external VPN service provider does not support IPv6 but happily returns IPv6 addresses to DNS queries, I had to disable IPv6 on that machine.
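By the way, whether a particular service would be affected, i.e. whether DNS returns an IPv6 address for it that would then bypass the IPv4-only tunnel, can be checked quickly with a few lines of Python (example.com as a stand-in for the real service):

import socket

def has_ipv6_address(hostname):
    # Returns True if DNS returns at least one AAAA record for the host
    try:
        return len(socket.getaddrinfo(hostname, None, socket.AF_INET6)) > 0
    except socket.gaierror:
        return False

# If this prints True on a dual-stack network while the VPN only tunnels
# IPv4, traffic to this host would go around the tunnel.
print(has_ipv6_address("example.com"))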

Speaking of inbound IPv6 to my servers, that's going to be an interesting thing to get working. So far I see two issues that have to be addressed:

  • Today I run several servers behind the same IPv4 address and domain name. With IPv6 they will have different IP addresses so using the same domain name is going to be a challenge.
  • My Dynamic-DNS provider does support IPv6 AAAA records but not updating IPv6 records dynamically other than over the web interface. Quite a shame in 2016…

Two fun things to figure out in 2016…

Wi-Fi WPA-Enterprise with Certificate Authentication

Today, most Wi-Fi hotspots at home use the standard WPA/WPA2 authentication and encryption mechanism with a shared password between the Wi-Fi hotspot and the clients. The downside of this approach is that all users have to share the same password, which enables an attacker who is in range of the network and in possession of the password to decode encrypted packets if he has observed the initial EAPOL authentication and ciphering dialog of another client. Another downside is that the password needs to be stored in all access points of a Wi-Fi network. All these things are clearly not acceptable in company environments or during public events that want to offer air interface security. For such environments, the Wi-Fi Alliance has specified the WPA-Enterprise authentication approach that can use various authentication methods with certificates and individual passwords for each user. Let's have a closer look at one option, of which I was recently able to take a Wireshark trace:

To address the need of companies for centralized user management, WPA/WPA2-Enterprise manages certificates and passwords on a central authentication server, often referred to as a RADIUS server. In practice it's not straightforward to discuss such setups because they are mostly used by companies and hence can't be discussed in public. Fortunately I've now found one network that uses WPA2-Enterprise with a certificate and passwords that can be discussed in public: the Wi-Fi network that was used during 32C3.

As they've described on their Wiki, a server-side certificate was used to authenticate the network towards the user via TTLS. To authenticate clients, a username/password of choice could be used in the network. As the conference network administrators were not interested in authenticating users, any username and password combination was accepted. In practice users could remain anonymous this way while at the same time an individual secret was used to generate cipher keys, i.e. packets can't be deciphered by an attacker even if the authentication packets were captured.

The screenshot of the Wireshark trace on the left (here's the pcap in case you want to have a more detailed look) shows how the TTLS/certificate authentication works in practice. After associating with the network, the Wi-Fi access point asks for a username, which can be anonymous, and then tells the user that it wants to proceed with an EAP-TTLS authentication procedure. The client device then answers with a 'Client Hello' packet that contains all cipher suites it supports. The network then selects a cipher suite and sends its signed certificate, which contains its public key, to authenticate itself.

In company environments, the certificate used in Wi-Fi networks is usually signed by a private certificate authority. To enable the device to validate the signed certificate that was sent, the public key of the certificate authority that signed the delivered certificate has to be stored in the device before it connects to the network for the first time.

In the case of the 32C3 network, a public certification authority was used to sign the certificate delivered to the device. As anyone who owns a domain can get a certificate for it signed by a public certification authority, an additional client-side configuration is required to ensure that only signed certificates with the correct domain name of the Wi-Fi network's authentication server are accepted. Unfortunately, Ubuntu's graphical network configuration tool doesn't have a field to configure this extra information, as shown in the second screenshot.

Fortunately it's possible to modify Ubuntu's configuration file for the network after it has been created in '/etc/NetworkManager/system-connections' by adding the 'altsubject-matches' line in the 802-1x section with the domain name used by the Wi-Fi network's certificate:

[802-1x]
# Outer authentication method: EAP-TTLS
eap=ttls;
# Any identity was accepted in the 32C3 network
identity=x
# Certificate of the public CA that signed the server certificate
ca-cert=/etc/ssl/certs/StartCom_Certification_Authority.pem
# Only accept a server certificate issued for this domain name
altsubject-matches=DNS:radius.c3noc.net;
# Inner authentication: username/password via PAP inside the TLS tunnel
phase2-auth=pap
# Password is stored by the secret agent (e.g. the keyring), not in this file
password-flags=1

Putting a wrong value in this line makes the connection establishment fail, so I could verify that the domain name is actually checked and the overall authentication process is secure.

Once the client device has accepted the server certificate (packet 14 in the trace), an encrypted, client-specific handshake message is exchanged. For this dialog, the client uses the public key that was part of the certificate to encrypt the message. Decoding the packets on the network side is only possible with the private key. As the private key is never sent over the air, an attacker can't use a copy of the certificate for a rogue access point.

Afterwards the standard 4-step EAPOL Wi-Fi messaging is used to activate link-level wireless encryption, based on an individual secret exchanged during the TTLS process. Packet 22 shows the first encrypted packet exchanged between the access point and the client device, a DHCP message to get an IP address. As the trace was done on the client device, the decoded version of the packet is shown. Once the IP address has been received the connection is fully established and user data packets can be exchanged.
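If you want to follow the exchange in the pcap linked above without Wireshark, the EAPOL and EAP packets can also be picked out with a few lines of Python and the scapy library (the local filename is of course just an assumption):

from scapy.all import rdpcap, EAP, EAPOL

packets = rdpcap("32c3-ttls-trace.pcap")  # the pcap linked above, saved locally

# Print a one-line summary of each authentication packet, i.e. the
# EAP identity/TTLS exchange and the final 4-step EAPOL key handshake
for number, packet in enumerate(packets, start=1):
    if packet.haslayer(EAP) or packet.haslayer(EAPOL):
        print(number, packet.summary())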

Book Review: Pioneer Programmer

If you have some background in computer science you’ve probably come across the term “von Neumann architecture” before. The term goes back to the brilliant mathematician John von Neumann who, for the first time in 1945, described the computer architecture we still use today, with an arithmetic logic unit, a control unit, registers and combined program and data memory, in a seminal paper on the EDVAC. As pointed out in the Wikipedia article there is quite some controversy about this paper, as it was only intended as a first internal draft for review and only bears von Neumann’s name but not those of the main inventors of the concepts, John Mauchly and Presper Eckert. While intended as an internal paper it was still distributed to a larger community and thus it appeared as if von Neumann had come up with the ideas all by himself. While attempts were made to set the record straight, the term “von Neumann architecture” stuck and has remained in place up to the present day.

There is a lot of controversy about the reasons, motivation and character of Herman Goldstine, who distributed the paper without consent. “Pioneer Programmer”, the autobiography of Jean Jennings Bartik edited by Jon T. Rickman and Kim D. Todd, has a lot of background information on this and many other topics of the early days of computing in the United States from her point of view. Jean was a member of the initial team of programmers of the ENIAC, the first fully electronic computer, in the mid-1940s and could thus witness this and many other events first-hand, and she decided to set a number of things straight with her autobiography. Pretty much forgotten until many decades later, the first ENIAC programmer team consisted solely of female mathematicians, as due to the war there was a shortage of male mathematicians, and the boys were more interested in building the computing machines than in programming them. Pioneer Programmer intends not only to set the record straight but also to tell the story of how women shaped early computing and to describe the difficulties they had in a male-dominated scientific world in the US and Europe during that time and the decades afterward. A fascinating story that starts with her childhood on a farm in rural America and ends with the jobs and positive as well as negative experiences she had in the computing industry as a woman in the decades after leaving the ENIAC behind.

Probably not a very well-known book, but for those who are interested in the facts behind the stories of early computing it's a must-read that I've very much enjoyed!