LTE-A Pro for Public Safety Services – Part 3 – The Challenges

In case you have missed the previous two parts on Private Mobile Radio (PMR) services over LTE, have a look here and here before reading on. In the previous post I described the potential advantages LTE can bring to PMR services, and from the long list it seems to be a done deal. Unfortunately, there is an equally long list of challenges that PMR poses for the 2G legacy technology it uses today, and these will not go away when moving on to LTE. So here we go: part 3 focuses on the downsides, which show quite clearly that LTE won't be a silver bullet for the future of PMR services:

Glacial Timeframes: The first and foremost problem PMR imposes on the infrastructure is the glacial timeframe this sector requires. While consumers change their devices every 18 months these days and move from one application to the next, a PMR system is static: in the past, 20 years without major network changes was considered the minimum, and it's unlikely this will change significantly in the future.

Network Infrastructure Replacement Cycles: Public networks including radio base stations are typically refreshed every 4 to 5 years because new generations of hardware are more efficient, require less power, are smaller, offer new functionalities, can handle higher data rates, etc. In PMR networks, timeframes are much more conservative because additional capacity is not required for the core voice services and there is no competition from other networks, which in turn doesn't push operators to make their networks more efficient or to add capacity. Also, new hardware means a lot of testing effort, which again costs money that can only be justified if there is a benefit to the end user. In PMR systems this is a difficult proposition because PMR organizations typically don't like change. As a result, the only reason for PMR network operators to upgrade their network infrastructure is that the equipment becomes 'end of life', is no longer supported by manufacturers and no spare parts are available anymore. The pain of upgrading at that point is even more severe, as after 10 years or so technology has advanced so far that moving from very old hardware to the current generation causes many problems.

Hard- and Software Requirements: Anyone who has worked in both public and private mobile radio environments will undoubtedly have noticed that quality requirements are significantly different in the two domains. In public networks the balance between upgrade frequency and stability often tips towards the former, while in PMR networks stability is paramount and testing is hence significantly more rigorous.

Dedicated Spectrum Means Trouble: The interesting question, which will surely be answered differently in different countries, is whether a future nationwide PMR network shall use dedicated spectrum or spectrum shared with public LTE networks. Using dedicated spectrum that is otherwise not used for public services means that devices with receivers for that spectrum are required. In other words, no mass-market products can be used, which is always a cost driver.

Thousands, Not Millions of Devices per Type: When mobile device manufacturers think about production runs they think in millions rather than in the few ten-thousands common in PMR. Perhaps this is less of an issue today, as current production methods allow design and production runs of 10,000 devices or even less. But why not use commercial devices for PMR users and benefit from economies of scale? Well, many PMR devices are quite specialized from a hardware point of view, as they must be sturdier and have extra physical functionalities, such as a big push-to-talk button, an emergency button, etc. that can be pressed even with gloves. Many PMR users also have different requirements compared to consumers when it comes to the screen of the device, such as being ruggedized beyond what is required for consumer devices and being usable in extreme heat, cold and wetness, when chemicals are in the air, etc.

ProSe and eMBMS Not Used For Consumer Services: Even though group call and multicast services are also envisaged for consumer use, it is likely that in practice they will be limited to PMR use. That will make them expensive, as the development costs will have to be shouldered by the PMR sector alone.

Network Operation Models

As already mentioned above there are two potential network operation models for next generation PMR services, each with its own advantages and disadvantages. Here's a comparison:

A Dedicated PMR Network

  • Nationwide network coverage requires a significant number of base stations and it might be difficult to find enough suitable sites for them. In many cases, base station sites can be shared with commercial network operators, but often enough, masts are already used by the equipment of several network operators and there is no more space for dedicated PMR infrastructure.
  • From a monetary point of view it is probably much more expensive to run a dedicated PMR network than to use the infrastructure of a commercial network. Also, initial deployment is much slower as no equipment that is already installed can be reused.
  • Dedicated PMR networks would likely require dedicated spectrum as commercial networks would probably not give back any spectrum they own so PMR networks could use the same bands to make their devices cheaper. This in turn would mean that devices would have to support a dedicated frequency band which would make them more expensive. From what I can tell this is what has been chosen in the US with LTE band 14 for exclusive use by a PMR network. LTE band 14 is adjacent to LTE band 13 but still, devices supporting that band might need special filters and RF front-ends to support that frequency range.

A Commercial Network Is Enhanced For PMR

  • High Network Quality Requirements: PMR networks require good network coverage, high capacity and high availability. Also, due to security concerns and the requirement for fast turn-around times when a network problem occurs, local network management is a must. These days this is typically only still done by high-quality networks rather than by networks that focus on budget rather than quality.
  • Challenges When Upgrading The Network: High quality network operators are also keen to introduce new features to stay competitive (e.g. higher carrier aggregation, traffic management, new algorithms in the network) which is likely to be hindered significantly in case the contract with the PMR user requires the network operator to seek consent before doing network upgrades.
  • Dragging PMR Along For Its Own Good: Looking at it from a different point of view, it might be beneficial for PMR users to be piggybacked onto a commercial network as this 'forces' them through continuous hardware and software updates for their own good. The question is how much drag PMR inflicts on the commercial network and whether it can remain competitive when slowed down by PMR quality, stability and maturity requirements. One thing that might help is that PMR applications could and should run on their own IMS core, with relatively few dependencies down into the network stack. This could allow commercial networks to evolve as required by competition and advances in technology while PMR applications evolve on dedicated and independent core network equipment. Any commercial network operator considering taking on PMR organizations should seriously investigate the impact on its network evolution and assess whether the additional income from hosting this service is worth it.

So, here we go, these are my thoughts on the potential problem spots for next generation PMR services based on LTE. Next up is a closer look at the technology behind it, though it might take a little while before I can publish a summary here.

50+ Gbit/s for Vodafone Germany on New Year’s Eve

Like every year, Vodafone has released numbers on mobile network usage during New Year's Eve between 8 pm and 3 am, as this is one of the busiest times of the year. This year, Vodafone says that 185 TB were used during those 7 hours. Let's say downlink and uplink traffic are split roughly 9:1, which would result in a total of 166.5 TB downloaded during that time. Divided by 7 hours of 60 minutes and 60 seconds each, and then multiplied by 8 to get bits instead of bytes, this results in an average downlink speed at the backhaul link to the wider Internet of 53 Gbit/s. An impressive number, so a single 40 Gbit/s fiber link won't do anymore (if they only had a single site and a single backhaul interconnection provider, which is unlikely). Back in 2011/2012 the same number was 'only' 7.9 Gbit/s.
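
The arithmetic can be double-checked with a few lines of Python. The 185 TB figure and the 9:1 split are taken from above; the rest is just unit conversion:

```python
# Back-of-the-envelope check of the average downlink throughput figure.
total_tb = 185.0          # total traffic in terabytes, 8 pm to 3 am
downlink_share = 0.9      # assumed 9:1 downlink:uplink split
hours = 7

downlink_bytes = total_tb * 1e12 * downlink_share
seconds = hours * 3600
avg_bit_s = downlink_bytes * 8 / seconds            # bytes -> bits, per second

print(f"{downlink_bytes / 1e12:.1f} TB downlink")   # 166.5 TB
print(f"{avg_bit_s / 1e9:.1f} Gbit/s average")      # 52.9 Gbit/s
```

Which confirms the roughly 53 Gbit/s quoted above, averaged over the whole evening; the peaks were certainly higher.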

On the other hand, when you compare the 53 Gbit/s for all Vodafone Germany customers to the 30 Gbit/s reached by the uplink traffic during the recent 32C3 congress, or the sustained 3 Gbit/s downlink data rate to the congress Wi-Fi generated by 8,000 mobile devices, the number suddenly doesn't look that impressive anymore. Or compare that to the 5,000 Gbit/s interconnect peaks at the German Internet Exchange (DE-CIX). Yes, it's a matter of perspective!

If you've come across similar numbers for other network operators please let me know, it would be interesting to compare!

Update: Secure Hotel Wi-Fi Sharing Over A Single VPN Tunnel For All Your Devices With A Raspberry Pi

Back in 2014, I came up with a project to use a Raspberry Pi as a Wifi access point in hotels and other places when I travel, to connect all my devices to a single Internet connection, which can be either over Wifi or over an Ethernet cable. As an added (and optional) bonus, the Raspberry Pi also acts as a VPN client and tunnels the data of all my devices to a VPN server gateway and only from there out into the wild. At the time I put my scripts and operational details on Github for easy access and made a few improvements over time. Recently I made a couple of additional improvements which became necessary as Raspbian upgraded its underlying source from Debian Wheezy to Debian Jessie.

One major change that this has brought along is that IPv6 is now active by default. For this project, IPv6 needs to be turned off, as most VPN services only tunnel IPv4 but happily return IPv6 addresses in DNS responses, which makes traffic go around the VPN tunnel if the local network offers IPv6. For details see here and here. Another change is that the OpenVPN client, if installed, is now reliably started during the boot process by default, which was not the case before. As a consequence I could put a couple of 'iptables' commands in the startup script to enable NATing to the OpenVPN tunnel interface straight away.
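
For reference, the two changes can be sketched as follows. This is a simplified sketch, not a copy of the actual scripts on Github: the interface names (wlan0 for the Wi-Fi clients, tun0 for the OpenVPN tunnel) are assumptions and need to be adapted to the actual setup:

```shell
# Keep IPv6 off so AAAA DNS answers can't route traffic around the IPv4-only VPN
sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1

# NAT all Wi-Fi client traffic out through the OpenVPN tunnel interface
iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
iptables -A FORWARD -i wlan0 -o tun0 -j ACCEPT
iptables -A FORWARD -i tun0 -o wlan0 -m state --state ESTABLISHED,RELATED -j ACCEPT
```

With rules like these in the startup script, client traffic only leaves via the tunnel once OpenVPN is up.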

In other words, things are better than ever and v1.41 on Github now reflects those changes. Enjoy!

Moving to Linux Full Time – Dan Gillmor Writes About His Experiences

Microsoft Windows 10 behaves like a spy under your fingertips these days, Apple gives you less and less freedom on its desktop OS, so there's never been a better time to regain privacy and freedom by switching to Linux on the desktop. Over the years I wrote many articles on this blog about my Linux delights but I haven't seen a better summary of why switching to Linux on the desktop full time is so great than Dan Gillmor's recent article on the topic. Highly recommended!

Owncloud Must Think “WordPress” When It Comes To Updating

There are two extremes in the popular cloud space when it comes to ease of updating: WordPress and Owncloud…

On one side is WordPress, which has about the simplest and most reliable update process there is: fully automatic for small upgrades, without even having to be triggered by the administrator, and a single click when going from one major release to the next. It hasn't failed me once in the past five years. And then there is Owncloud, which is the exact opposite.

Over the past year it failed me during each and every update, even for small security upgrades, with obscure error messages, broken installations and last-resort actions such as deleting app directories and simply ignoring some warnings and moving ahead despite them. If you think it can't be that bad, here's my account of one such update session last year. In the meantime I've become so frustrated and cautious that I clone my live Owncloud system and first try the update on a copy I can throw away. Only once I've found out how to run the upgrade process, which unfortunately changes every now and then as well, which things break and how to fix them, do I run an upgrade on my production system. But perhaps there is some hope in sight?

My last upgrade a couple of days ago worked flawlessly, apart from the fact that the update process has changed yet again and it's now mandatory to finalize the upgrade from a console. But at least it didn't fail. I was about to troll about the topic again, but this morning I saw a blog post over at the Owncloud blog in which they finally admit in public that their upgrade process leaves a lot to be desired and that they have started to implement a lot of things to make it more robust and easier to understand. If you have trouble updating Owncloud as well, I recommend reading the post; it might make you feel a bit better and give you some hope for the next update.

And to the Owncloud developers I would recommend going a bit beyond what they have envisaged so far: blinking lights, more robustness and more information about what is going on during an update are nice things and will certainly improve the situation. In the end, however, I want an update process that is just like WordPress's: you wake up in the morning and have an email in your inbox from your WordPress installation that tells you it has just updated itself, that all is well and that you don't have to do anything anymore! That's how it should be!

Wi-Fi WPA-Professional with Certificate Authentication

Today, most Wi-Fi hotspots at home use the standard WPA/WPA2 authentication and encryption mechanism with a password shared between the Wi-Fi hotspot and the clients. The downside of this approach is that all users have to share the same password, which enables an attacker who is in range of the network and in possession of the password to decrypt the packets of another client if he has observed that client's initial EAPOL authentication and ciphering dialog. Another downside is that the password needs to be stored in all access points of a Wi-Fi network. All of this is clearly not acceptable in company environments or during public events that want to offer air interface security. For such environments, the Wi-Fi Alliance has specified the WPA-Professional authentication approach, which can use various authentication methods with certificates and individual passwords for each user. Let's have a closer look at one option of which I was recently able to take a Wireshark trace:

To address the need of companies for centralized user management, WPA/WPA2-Professional manages certificates and passwords on a central authentication server, often referred to as a RADIUS server. In practice it's not straightforward to discuss such setups because they are mostly used by companies and hence can't be discussed in public. Fortunately I've now found one network that uses WPA2-Professional with a certificate and passwords and that can be discussed in public: the Wi-Fi network that was used during 32C3.

As described on their Wiki, a server-side certificate was used to authenticate the network towards the user via TTLS. To authenticate clients, a username/password of choice could be used in the network. As the conference network administrators were not interested in authenticating users, any username and password combination was accepted. In practice users could remain anonymous this way, while at the same time an individual secret was used to generate cipher keys, i.e. packets can't be deciphered by an attacker even if the authentication packets were captured.

The screenshot of the Wireshark trace on the left (here's the pcap in case you want to have a more detailed look) shows how the TTLS/certificate authentication works in practice. After associating with the network, the Wi-Fi access point asks for a username, which can be anonymous, and then tells the client that it wants to proceed with an EAP-TTLS authentication procedure. The client device then answers with a 'Client Hello' packet that contains all cipher suites it supports. The network then selects a cipher suite and sends its signed certificate, which contains its public key, to authenticate itself.

In company environments, the certificate used in Wi-Fi networks is usually signed by a private certificate authority. To enable the device to validate the signed certificate that was sent, the public key of the certificate authority that has signed the delivered certificate has to be stored in the device before it connects to the network for the first time.

In the case of the 32C3 network, a public certification authority was used to sign the certificate delivered to the device. As anyone who owns the domain specified in a certificate can get it signed by a public certification authority, an additional client-side configuration is required to ensure that only signed certificates with the correct domain name of the Wi-Fi network are accepted. Unfortunately, Ubuntu's graphical network configuration tool doesn't have a field for this extra information, as shown in the second screenshot.

Fortunately it's possible to modify Ubuntu's configuration file for the network after it has been created in '/etc/NetworkManager/system-connections' by adding the 'altsubject-matches' line in the [802-1x] section with the domain name used by the Wi-Fi network's certificate.

[802-1x]
eap=ttls;
identity=x
ca-cert=/etc/ssl/certs/StartCom_Certification_Authority.pem
altsubject-matches=DNS:radius.c3noc.net;
phase2-auth=pap
password-flags=1

Putting a wrong value in this line makes the connection establishment fail, so I could verify that the overall authentication process is secure.

Once the client device has accepted the server certificate (packet 14 in the trace), a client-specific encrypted handshake message is exchanged. For this dialog, the client uses the public key that was part of the certificate to encrypt the message. Decoding the packets on the network side is only possible with the private key. As the private key is never sent over the air, an attacker can't use a copy of the certificate for a rogue access point.

Afterwards, the standard 4-way EAPOL Wi-Fi handshake is used to activate link-level wireless encryption, based on an individual secret exchanged during the TTLS process. Packet 22 shows the first encrypted packet exchanged between the access point and the client device, a DHCP message to get an IP address. As the trace was done on the client device, the decoded version of the packet is shown. Once the IP address has been received, the connection is fully established and user data packets can be exchanged.

A Flatrate For Calling All EU Fixed Lines And Mobiles From Anywhere In The EU

I travel a lot in Europe and I call lots of people in different countries not only when I'm in my home country but also while traveling. In other words, without a good tariff for international calling and data use, it's no fun. A new mobile EU-Flat tariff introduced by my mobile network operator last year has, for the first time, enabled me to use my monthly mobile data volume anywhere in the EU and to call back home without per minute fees for a modest additional monthly fee of 5 euros. This has helped a lot and apart from one exception I didn't have to use local SIM cards anymore.

One thing that the offer didn't include, however, was making calls from my home country or while traveling in the EU to the fixed and mobile phones of other EU countries. At prices of well over a euro a minute this had remained prohibitively expensive. But things have moved on since then, and my fixed and mobile network operator of choice has made me a bundle offer that extends my current EU-flat to also include voice calls from anywhere in the EU to anywhere in the EU, to both fixed and mobile devices. Yes, that's what I've been waiting for for so many years. The 'everything inclusive anywhere EU-Flat' now costs an extra 10 euros on top of my normal fixed-line and mobile contracts instead of the 5 euros I paid before, but with my usage pattern that's a deal I was more than willing to take.

No more waiting to make calls until I'm back home or in the office, no more complicated dial-in numbers, no nightmares about costs spiraling out of control because a conference call only offers an international dial-in number: it's definitely my tariff add-on of the year!

P.S.: No, this is not an advertisement for a particular fixed and mobile network operator which is why I haven't named the company in the first place. This blog entry is about documenting a very positive change in the telecommunication market and to encourage other network operators to follow.

32C3 – Congress Infrastructure Review And A Plea for GSM ARFCNs for 2016

Like every year, one of the last sessions at the end of the Congress is the Infrastructure Review. Here, the people who built the congress data network, the DECT network, the GSM network and the Seidenstraße talk about the technology they used this year. It's always interesting to hear how much data and how many calls and packets have been shuffled through the networks and how that compares to last year, how many people used the network, how many wireless devices were used, which networking equipment was used, and so on. For anyone interested in networks this talk is a must-see and fun to watch, not only because of the interesting numbers but also because the presentation contains a lot of hacker fun and sarcasm. This year was certainly no exception. The cut video of the session is not yet on the congress streaming server but the raw, uncut version can be found here in the meantime.

One important message I also want to repeat here: so far, the GSM network of the Congress has used the 1800 MHz DECT/GSM guard band. This won't be possible in 2016 anymore, as that part of the spectrum was auctioned off in 2015, so one of the network operators in Germany would have to be kind enough to loan the Congress GSM network organizers a couple of ARFCNs for the week. So if you are working in spectrum planning at a network operator and think you can spare a couple of channels for a week at the Congress location, please think about it and get in touch with the organizers. If you don't know how to do that, let me know, I'll be glad to help!

32C3 – Vehicle2Vehicle Communication with IEEE 802.11p

One feature some proponents are pushing for in future 5G networks is ultra-short reaction times for ultra-critical communication, for example between cars. What I have failed to understand in this discussion so far is why car-to-car communication should require a fixed network infrastructure and a backhaul network. After all, car-to-car communication mainly makes sense for cars in close proximity, to exchange information about potential dangers such as emergency braking and breakdowns, and about their current status such as speed, direction, etc. It seems my skepticism was not unfounded because, unknown to me and perhaps also to the 1 ms 5G proponents, decentralized solutions that require no network infrastructure already exist.

While Europe and the US seem to be on different paths (once again) on the higher layers of the protocol stack, both approaches are based on the IEEE 802.11p extension of the Wi-Fi standard. In this "Wi-Fi" flavor, there are no central access points, no fixed equipment and no backhaul of any kind. On top of this physical layer, event and context information is exchanged. An interesting challenge is how to ensure that messages are sent from "real" vehicles and not from rogue devices that want to disrupt traffic, e.g. by sending messages about emergency braking etc., while at the same time ensuring privacy, i.e. sending messages anonymously to prevent tracking.

The concept that car companies have come up with is a public key infrastructure, with cars equipped with a master certificate by the car manufacturer. Based on the master certificate, temporary certificates are signed by a certificate authority and then included in the 802.11p messages sent by cars. Vehicles receiving messages can then validate a message by checking the temporary certificate, which does not contain the car's identity and is changed frequently. Rogue devices that do not have a master certificate can't get temporary certificates, at least in theory, and therefore can't include proper temporary certificates in their messages. That makes me wonder, of course, how hard it might be in the future to get a valid certificate by extracting it from the on-board computer of a vehicle. SIM cards of mobile devices have provided pretty good security over the past decades, so there is at least some hope that the master certificates can be stored safely.
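
The temporary-certificate idea can be illustrated with a small sketch. This is purely conceptual and not the actual ETSI/IEEE message or certificate formats; it uses Ed25519 signatures from the third-party Python 'cryptography' package, and all names are made up for illustration:

```python
# Conceptual sketch of the pseudonym-certificate scheme described above: a CA
# signs a short-lived key, cars sign messages with it, receivers check the
# chain but never see a long-term vehicle identity.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat


def raw(public_key):
    """Raw 32-byte encoding of an Ed25519 public key."""
    return public_key.public_bytes(Encoding.Raw, PublicFormat.Raw)


# The certificate authority of the scheme (trusted by all cars).
ca_key = Ed25519PrivateKey.generate()
ca_pub = ca_key.public_key()

# A car with a valid master certificate obtains a temporary certificate:
# the CA signs the short-lived public key - no vehicle identity inside.
temp_key = Ed25519PrivateKey.generate()
temp_cert = ca_key.sign(raw(temp_key.public_key()))

# The car broadcasts an event message signed with the temporary key.
message = b"emergency braking, lane 2"
msg_sig = temp_key.sign(message)


def validate(message, msg_sig, temp_pub_raw, temp_cert, ca_pub):
    """Receiver side: check the temporary certificate, then the message."""
    try:
        ca_pub.verify(temp_cert, temp_pub_raw)  # CA vouches for the temp key
        Ed25519PublicKey.from_public_bytes(temp_pub_raw).verify(msg_sig, message)
        return True
    except InvalidSignature:
        return False


# A genuine message validates ...
print(validate(message, msg_sig, raw(temp_key.public_key()), temp_cert, ca_pub))

# ... while a rogue device without a CA-issued certificate is rejected.
rogue_key = Ed25519PrivateKey.generate()
fake_cert = rogue_key.sign(raw(rogue_key.public_key()))  # self-signed, no CA
print(validate(message, rogue_key.sign(message),
               raw(rogue_key.public_key()), fake_cert, ca_pub))
```

The privacy part comes from rotating `temp_key`/`temp_cert` frequently, so messages from the same car can't be linked over time.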

For more details, here's the talk on this topic from 32C3.

32C3 – Approaching 20 Gbit/s of Outbound Traffic

12,000 people have bought tickets for the 32C3 and while they consume a lot of data from the Internet, it's easily dwarfed by the amount of data that is flowing from the congress to the outside world. How much of that is due to the live and recorded video streams is hard to tell, but I guess it might be a fair amount. The screenshot on the left shows the state of affairs at 6 pm on day 2 of the congress:

  • Outgoing traffic keeps growing and now approaches 20 Gbit/s.
  • Incoming traffic is at 5.6 Gbit/s, with 3 Gbit/s flowing to 7,800 wireless devices over 146 Wi-Fi access points.
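
A quick bit of arithmetic puts the Wi-Fi figures into perspective. Using only the numbers from the list above, and assuming the load were spread evenly (which it of course isn't):

```python
# Average share of the 3 Gbit/s Wi-Fi downlink per device and per access point
wifi_downlink_bit_s = 3e9
devices = 7800
access_points = 146

per_device = wifi_downlink_bit_s / devices
per_ap = wifi_downlink_bit_s / access_points

print(f"{per_device / 1e6:.2f} Mbit/s per device")      # 0.38 Mbit/s
print(f"{per_ap / 1e6:.1f} Mbit/s per access point")    # 20.5 Mbit/s
```

So even at this scale, the average load per access point stays well within what current Wi-Fi hardware handles comfortably.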

Amazing numbers, and the 5 GHz Wifi is working just great for me! Only once so far did I have to fall back to LTE network coverage, during the intro session and keynote on day 1 in the huge conference hall, which can hold 3,000 people. There are at least 8 Wifi access points in there, which handle the load just fine at other times.