Latency Comparison – DSL, UMTS, LTE and Fiber

Speaking of the 1 ms 5G latency myth in my previous post on the topic, let's have a look at the round trip times to servers on the Internet today over different access networks – with surprising results!
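All the numbers below come from simple command line pings. For those who want to reproduce them at home, here's a minimal sketch of how such a measurement can be scripted; it assumes a Linux-style ping binary in the PATH, and the host name is just a placeholder:

# Minimal RTT measurement sketch: send a few ICMP echo requests and report
# the minimum and average round trip time. Assumes a Linux-style 'ping'.
import re
import statistics
import subprocess

def ping_rtts(host: str, count: int = 10) -> list[float]:
    """Run ping and return the individual round trip times in milliseconds."""
    out = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True, text=True, check=True,
    ).stdout
    # Output lines look like: "64 bytes from ...: icmp_seq=1 ttl=57 time=23.8 ms"
    return [float(m) for m in re.findall(r"time=([\d.]+) ms", out)]

if __name__ == "__main__":
    rtts = ping_rtts("example.com")   # placeholder host
    print(f"min {min(rtts):.1f} ms, avg {statistics.mean(rtts):.1f} ms")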

(V)DSL: When I ping a server pretty close to my location and within the network of my access provider, I get ping round trip delay times over my VDSL connection of around 24 milliseconds. Since that's what I have at home, it's my reference, and it is quite far away from the mythical 1 millisecond 5G latency. About 3-5 ms are spent on the Wi-Fi link, the rest results from delays in the fixed access network. The remaining delay to the server is minimal, in the order of a millisecond or two.

UMTS: To the same server my round trip delay times are in the order of 100 milliseconds, so quite a bit more.

LTE: Here, I get round trip times of around 45 milliseconds to that server, quite a bit of an improvement over the UMTS network.

Fiber: In Paris I have a Fiber to the Home (FTTH) GPON link. From a machine connected over Ethernet to a router, which in turn connects to a fiber/Ethernet converter, I get round trip times to a server close to the edge of that network in the order of 3-4 milliseconds. That is already quite a bit closer to the mythical 1 millisecond 5G delay time. I then pinged a server around 600 km away in Germany and got round trip times of 15 milliseconds. Out of those, 6 milliseconds are due to the propagation delay of the light in the fibers, the rest is processing delay.
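Just to check that 6 millisecond figure against the physics, here's a quick back-of-the-envelope calculation; the refractive index and the assumption of a more or less direct fiber route are mine:

# Rough decomposition of the 15 ms ping to a server ~600 km away:
# how much is pure propagation delay in the fiber, and how much is left
# for routers and other processing? (Assumes a fairly direct route.)
C_VACUUM_KM_S = 299_792        # speed of light in vacuum, km/s
REFRACTIVE_INDEX = 1.47        # typical value for optical fiber
speed_in_fiber = C_VACUUM_KM_S / REFRACTIVE_INDEX   # ~204,000 km/s

distance_km = 600              # one-way distance to the server in Germany
measured_rtt_ms = 15

propagation_rtt_ms = 2 * distance_km / speed_in_fiber * 1000
print(f"propagation: {propagation_rtt_ms:.1f} ms")                           # ~5.9 ms
print(f"processing/queuing: {measured_rtt_ms - propagation_rtt_ms:.1f} ms")  # ~9 ms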

There we go: I was quite surprised by the phenomenal delay performance of the fiber connection, it's not far from the physical limits. The question now is if and how this efficiency can be brought to wireless as well, when even VDSL, a fixed-line technology, has far more delay.

The 1 Millisecond 5G Myth

5G must be on the steeply rising part of the Gartner Hype Cycle curve, as I have heard a lot of non-technical people making a lot of technical statements out of context, beyond the usual Mbit/s peak data rate claims. A prime example is the 1 millisecond round trip time that 5G should/will have to enable the 'tactile' Internet, i.e. Internet connectivity that is used to remotely interact with the physical world.

That all sounds nice, but physics stands a bit in the way and nobody seems to say so. The speed of light and of electrical signals is finite, and in one millisecond light can only travel around 200 km through an optical cable. So even if the network equipment adds no latency whatsoever, the maximum round trip distance is 100 km. In other words, there's no way to remotely control a robot in one part of the world with a latency of 1 ms from a place halfway around the world. But then, why let physics stop you?
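The numbers behind this are simple enough to write down, assuming light travels at roughly 200,000 km/s in optical fiber:

# Maximum distance reachable within a 1 ms round trip budget, ignoring
# all processing delay in network nodes along the way.
SPEED_IN_FIBER_KM_S = 200_000        # roughly c / 1.5 for light in fiber

rtt_budget_s = 0.001
fiber_traversed_km = SPEED_IN_FIBER_KM_S * rtt_budget_s    # 200 km in total
max_distance_km = fiber_traversed_km / 2                   # there and back again

print(f"fiber traversed in 1 ms: {fiber_traversed_km:.0f} km")
print(f"maximum distance to the far end: {max_distance_km:.0f} km")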

So perhaps what was really meant is to further reduce the latency of the network components themselves? A big step was taken in LTE with an air interface that uses 1 ms scheduling intervals and by basing all network interfaces on the IP protocol to remove protocol conversions and the resulting overhead and latency. A scheduling interval of 1 ms means the round trip time over the eNodeB is at least twice that, without even forwarding the packet to another node in the network. Add potential HARQ (Hybrid ARQ) retransmissions and you already end up at several milliseconds, as the rough sketch below illustrates. Sure, one could further reduce the length of the time slices at the expense of additional overhead. But would it really help, considering the many other routers between one device and another? Have a look at this great post by Don Brown and Stephen Wilkus which goes into the details.
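Here's a deliberately optimistic and purely illustrative latency budget for the radio part alone; the individual values are my own rough assumptions for a lightly loaded LTE FDD cell, not figures from the post linked above:

# Illustrative lower-bound estimate for an LTE air interface round trip,
# just to show why 1 ms is out of reach with 1 ms scheduling intervals.
TTI_MS = 1.0                     # one LTE scheduling interval (subframe)

components_ms = {
    "UE processing + uplink transmission": 2 * TTI_MS,
    "eNodeB decoding + scheduling": 2 * TTI_MS,
    "downlink transmission + UE decoding": 2 * TTI_MS,
}
best_case_ms = sum(components_ms.values())

HARQ_RTT_MS = 8.0                # FDD HARQ retransmission interval
P_RETX = 0.1                     # assumed share of transmissions needing one retransmission

average_ms = best_case_ms + P_RETX * HARQ_RTT_MS
print(f"best case: {best_case_ms:.0f} ms, average with retransmissions: {average_ms:.1f} ms")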

GoGo Experience Over The Clouds – Without Certificate Forgery

Here I am, over the clouds again, and an interesting aspect of flying in the US is that many flights have Internet access on board. Here's how it worked for me while putting together this blog post:

On Delta, Internet over the clouds is provided by GoGoAir and I was getting download speeds between 1 and 3 Mbit/s with round trip times of around 90 ms without my VPN. With an OpenVPN tunnel to my gateway in Europe I got round trip delay times of around 260 ms, quite a good value as well. In the uplink direction I got around half a megabit per second out of the connection. Over the hour I used the system it was quite stable, but there were temporary outages of 15-20 seconds every now and then and occasional long round trip times of several seconds during which data only trickled in. I'm not sure why these things happen; cell edge or handover problems perhaps?

Wikipedia says that the system uses 160 ground base stations distributed over the continent and 'classic' EV-DO 3G connectivity between the plane and the ground. That would be consistent with the speeds I've experienced, but it could of course also be that traffic shaping is applied on a per-device basis and overall speeds could have been higher.

Web browsing felt snappy, and just for the fun of it I dropped my VPN tunnel for a little while to see if Gogo still forges Google certificates for YouTube. It looks like the bad press around the issue has made them think about it again, as I couldn't observe rogue certificates for YouTube anymore.

Today a 3G link to the ground might still be sufficient, but with rising data traffic the system will need to be upgraded to a faster technology in the future. Let's see if ground-based LTE will be the technology of choice for planes flying over land rather than satellites, which are the only choice over oceans for obvious reasons. Personally I'd prefer ground-based communication, as using satellites in geostationary orbit results in very long round trip delay times.
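How long exactly? Simple geometry already sets the floor, before any processing delay is added:

# Minimum physical round trip time over a geostationary satellite link:
# plane -> satellite -> ground station and back again, best case with the
# satellite directly overhead.
GEO_ALTITUDE_KM = 35_786       # geostationary orbit above the equator
C_KM_S = 299_792               # speed of light / radio waves in vacuum

path_km = 4 * GEO_ALTITUDE_KM  # up and down, in both directions
rtt_ms = path_km / C_KM_S * 1000
print(f"propagation alone: {rtt_ms:.0f} ms")   # ~477 ms, close to half a second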

So What Exactly Is 5G?

Now that 3GPP has officially started working on 5G, the time has come to put lofty ideas and cheap talk into practical specifications. I'm looking forward to this because I still find most ideas that are currently floating around too abstract and unrealistic. But vendors and the NGMN have started publishing whitepapers that give a bit more insight into where we are going. After reading the whitepapers of Ericsson, Nokia and the NGMN on the topic, here's my summary and my own thoughts:

Radio Technology Mix: All whitepapers agree that 5G will not be about a single radio technology vying for dominance but rather a technology mix. Current radio technologies such as LTE(-Advanced) in the cellular domain and Wi-Fi in the home and office domain will be further evolved and are part of the technology mix. New technologies should be specified to tap the potential offered by large chunks of so far unused spectrum above 6 GHz for communication over very short distances. Some technologies are already there today; take 802.11ad Wi-Fi as an example.

Virtually Latency Free: The "Tactile Internet" is a new buzzword which means tactile remote control of machines and instantaneous (virtually latency free) feedback. Here's a good description of the concept. Marketing managers are promising round trip delay times of a millisecond in future networks but forget to mention the constraints. But perhaps they have discovered Star Trek like subspace communication? More on this in a future post.

Ultra-Dense Deployments With Ultra-Cheap Radios: As all whitepapers I've read correctly point out, the only way to increase data rates is to shrink cell sizes. This goes hand in hand with using higher frequency bands above 6 GHz. That means that the number of (what we still call) 'base stations' has to grow by orders of magnitude, which in turn means that they have to become ultra-cheap to install, must configure themselves without human intervention and operate at almost zero cost. Sounds like a nice challenge; perhaps it could be done by turning light bulbs into yocto base stations (nano, femto and other small units are already in use…)? But who's going to pay the extra money to put a transmitter into light bulbs, who's going to 'operate' the light bulb, and should such connectivity be controlled or open to everyone? That's not only a technical question but will also require a totally different business model compared to network operators running a cellular network and installing infrastructure without involvement of their customers. Again, the light bulb comes to mind: light bulbs and power cables are installed by their owners not only to illuminate a certain area for themselves but also for others. So in addition to fundamentally new technology and fundamentally new business models, it's also a fundamentally new psychological approach to providing connectivity. Perhaps it should be called "light-bulb connectivity"?

Lots of Devices Exchanging Little Data: Today's networks are optimized to handle a limited number of devices that transfer a significant amount of data. In the future there might well be many devices talking to each other or to servers on the network, exchanging only very little data and only very infrequently. That means a new approach is required to reduce the overhead needed for devices to signal to the network where they are and that they are still available, perhaps beyond what 3GPP has already specified as part of the 'Machine Type Communication' (MTC) work item.

Local Interaction: Great ideas are floating around on radio technologies that would allow local interaction between devices. An example is cars communicating with each other and exchanging information about their location, speed, direction, etc. That sounds like an interesting way to enable cars to drive autonomously or to prevent accidents, but it might break the business model of making money by backhauling data.

Spectrum Licensing Scheme Shake-Up: Some whitepapers also point out that for higher frequencies it might not make a lot of sense to sell spectrum for exclusive use to network operators. After all, range is very limited and not everybody can be in the same place. So license-free or cooperative use might be more appropriate, especially if a chunk of spectrum is not used for backhauling but only for local connectivity.

3GPP's Role: All of this makes me wonder a bit how 3GPP fits into the equation. After all, it's an industry body where manufacturers and network operators define standards. In 5G, however, network operators are probably no longer in control of the 'last centimeter' devices and thus have no business model for that part of 5G. So unlike in 2G, 3G and 4G, 3GPP might not have all the answers and specifications required for 5G.

Summary

So here's my take on the situation: for 5G, everything needs to change, and whenever the concept or a part of it is discussed, one central question should be asked: who is going to backhaul the massive amounts of data and how is that done? In 2G, 3G and 4G that question has been very simple to answer over the last decades: network operators set up base stations on rooftops and install equipment to backhaul the data over copper cables, fiber or radio links. For 5G that simple answer will no longer work due to the massive increase in the number of radios and backhaul links required. Operators will no longer be able to do this on their own as we move from nodes that cover the last mile to nodes that only cover the last centimeters. That means we have to move to a 'lightbulb' model, with all that this implies.

GPRS to LTE Reselection During Data Transfers – Part 2

On a previous blog post about new mobiles now supporting GPRS to LTE reselection during ongoing data transfers, a commenter suggested that this was a network rather than a device feature. The answer is quite interesting, so I decided to make a post out of the response rather than just reply in the comments section.

It's in the nature of 3GPP specifications to have several options for a feature, and this one is no exception. Reselection from GPRS to LTE during a data transfer is optional: it can be implemented as a device-only feature, or the device can signal to the network that it is capable of making LTE measurements during an ongoing GPRS data transfer and let the network decide what to do. Which of these options a device supports is sent to the network during the GPRS attach procedure (a small decoding sketch follows the list below):

Message:  ATTACH REQUEST
Information Element: GERAN to E-UTRA support in GERAN packet transfer mode

Possible Values:

  • 0 0 (0): None
  • 0 1 (1): E-UTRAN Neighbour Cell measurements and MS autonomous cell reselection to E-UTRAN supported
  • 1 0 (2): CCN towards E-UTRAN, E-UTRAN Neighbour Cell measurement reporting and Network controlled cell reselection to E-UTRAN supported in addition to capabilities indicated by '01'
  • 1 1 (3): PS Handover to E-UTRAN supported in addition to capabilities indicated by '01' and '10'
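For those digging through traces, here's a small sketch that maps the two-bit value to the descriptions from TS 24.008 quoted above; the function name and the way you obtain the two bits from your trace are of course up to you:

# Decode the 'GERAN to E-UTRA support in GERAN packet transfer mode' bits
# (a 2-bit field, values 0..3) into the capability description.
GERAN_TO_EUTRA_SUPPORT = {
    0b00: "None",
    0b01: "E-UTRAN neighbour cell measurements and autonomous reselection to E-UTRAN",
    0b10: "CCN towards E-UTRAN and network controlled reselection, in addition to '01'",
    0b11: "PS handover to E-UTRAN, in addition to '01' and '10'",
}

def decode_geran_to_eutra_support(value: int) -> str:
    """Map the 2-bit field to its meaning."""
    return GERAN_TO_EUTRA_SUPPORT[value & 0b11]

print(decode_geran_to_eutra_support(0b01))   # autonomous reselection only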

I've checked a number of recent mobiles and all of them either don't support the feature at all or support the autonomous cell reselection option without involvement of the network (as is the case for GSM to UMTS reselection today as well).

That doesn't mean there are no networks and mobiles that support the network-controlled variant, but I wonder if network operators are really interested in the feature when the autonomous variant works quite well!?

For details see 3GPP TS 24.008, Table 10.5.146 and search for "GERAN to E-UTRA support in GERAN packet transfer mode".

SSD Endurance – Theory and Practice

Two years ago I estimated that my SSD would last around 30 years, based on the number of re-write cycles and the theory laid out in an epic Anandtech article on the topic. That was the theory. Now we have real numbers based on a practical endurance test of several SSDs conducted by TechReport.

According to the report, the Samsung SSD could take at least 100 TB of writes before even the first hints of wear could be detected, and it kept working until roughly twice that amount of data had been written to it before it eventually stopped working. So based on the 5 GB of data that I write to my SSD per day (have a look at the first link above for how I arrive at this value), my SSD would last me around 54 years based on the 100 TB figure, as the quick calculation below shows.
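The arithmetic, with the 5 GB/day write volume from my earlier post and the 100 TB wear threshold from the TechReport test:

# Simple endurance estimate: how long until 100 TB have been written
# at a rate of 5 GB per day?
WRITES_PER_DAY_GB = 5
ENDURANCE_TB = 100

days = ENDURANCE_TB * 1000 / WRITES_PER_DAY_GB   # 20,000 days
years = days / 365.25
print(f"{years:.1f} years")                      # ~54.8 years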

I should mention though, perhaps, that I already replaced the first SSD I bought after about a year and a half because it was full and I had to upgrade to a 1 TB model… 30 years, 54 years, it's all a bit academic with the amount of data I keep accumulating, resulting in drive swaps to increase capacity…

3GPP’s Odyssey to 5G Has Begun

A couple of days ago, 3GPP published a tentative timeline for its upcoming 5G standardization activities in the coming years. In other words, the first steps are now being taken from "what could 5G be" to "what will 5G be".

The 3GPP timeline set for 5G pretty much starts today with the SA1 SMARTER Study item and extends well into 2020:

  • Today: SA1 SMARTER Study Item
  • September/December 2015: Radio channel modeling and kick-off of a RAN Study Item on scope and requirements.
  • February 2016: RAN Study Item to evaluate potential solutions
  • 2018: A RAN Work Item to specify the agreed solutions, extending into at least 2020.
  • LTE will continue to evolve over this timeframe as well, as it is seen as an integral part of the overall 5G network architecture.

The first step to get from "what could it be" to "what will it be" might actually be the most difficult one as the ideas about what "5G could be" currently imply a fundamental conceptual change both in terms of technology and who does what in the value chain. I've taken a look at a couple of 5G whitepapers and will post a summary of my thoughts in an upcoming blog post.

No matter how this turns out in the next 5 years it's going to be an interesting Odyssey with lots of surprises along the way!

LTE Internet Access in the US For Travelers – With A Local SIM

Just around this time last year I wrote about 3G Roaming in the US with a Local SIM card that could be ordered from abroad before starting the trip. While it worked well, the main drawbacks were finding a mobile that would work on US frequency bands and the 'limitation' to UMTS. Also, the network kept dropping my VPN connection at random intervals. A year later, networks and offers have significantly advanced.

This time, I bought a T-Mobile US prepaid SIM for Internet connectivity after arrival that would not only let me use their UMTS but also their LTE network. The cost of the SIM card was $15, and options that can be selected online range from $5 per day for 500 MB, through $30 for 3 GB for 30 days, to $50 for 7 GB for 30 days. Not cheap, but 'business traveler' affordable. Also, the SIM card is kept active for up to 365 days, which is great if some time passes between trips to the US.

Speed-wise, I could easily reach data rates of 10-15 Mbit/s in the downlink and 8 Mbit/s in the uplink while my tethering device was camped on LTE band 4 (1700/2100 MHz) on a 10 MHz carrier at the hotel I stayed at in Kansas City. I also noticed that another 5 MHz LTE carrier was on air on band 2 (1900 MHz PCS). Reliability-wise, the network has also made a great step forward, as I didn't notice a single VPN drop over those days.

Another thing that has significantly improved since last year is the availability of mobile devices sold in the EU that support some of the US LTE frequency bands. The iPhone supports a phenomenal 20 LTE bands, and other devices, e.g. some from Sony, include support for LTE band 4, which is used by T-Mobile US and others. Here's an example from their German web presence. So if you travel to the US, it's worth finding out which LTE bands a device supports before you buy it.

All in all, the SIM has served me well, and offers like this are another step in the right direction towards globally affordable and fast Internet access.

How to Counter Hotel Wi-Fi Deassociation Attacks

Recently the FCC made it crystal clear that deassociation attacks by hotel Wi-Fi installations, used to force 'guests' onto the hotel's Wi-Fi instead of tethering their equipment to their own smartphones and tablets, are illegal. That only applies to the US, of course, and even though it's a very effective way to aggravate customers, it doesn't mean nobody else will try to use it in the future. But it turns out there's an effective countermeasure, at least for the foreseeable future.

The attack vector used by such Wi-Fi installations is to send deassociation management frames to devices connected to hotspots other than the local venue's own. Unlike data frames, which are encrypted and thus can't be forged, Wi-Fi management frames are sent in the clear and can therefore be injected by anyone. To mitigate rogue deassociations and other attack vectors, the 802.11w amendment to the Wi-Fi standard describes a way to also protect management frames, which effectively counters such attacks.

There are many amendments to the Wi-Fi standards that have never been implemented, and for a long time it looked like this was yet another one. But since July 2014 the Wi-Fi Alliance requires implementation of the protected management frames amendment in its Wi-Fi certification scheme when a device supports 802.11ac, the latest super high speed transmission mode, as reported here, here and here. That's good news, as this certification is required for the Wi-Fi logo on the sales packaging and is a precondition set by many companies (such as mobile network operators) for selling a Wi-Fi capable device. Also, a growing number of access points and devices such as notebooks, smartphones and tablets support 802.11ac today and even more will do so in the future.

I ran a quick trace of all access points in the neighborhood but didn't find any indication of the feature being supported in their beacon frames. As described here in detail and shown in the screenshot on the left, there are two bits towards the end of the beacon frame that indicate to devices whether PMF (Protected Management Frames) is supported or not. These indicator bits are also sent by devices during connection establishment, so it's easy to find out if a device supports PMF. One source claimed that the Samsung S5 already supports it, but when I traced the connection establishment both bits were set to 0. So at least my S5 does not support it, or doesn't want to indicate the capability to an access point that itself doesn't support it. The result is perhaps a bit disappointing but not really surprising, as the new rule only came into effect half a year ago. So I will have to get hold of devices certified after that date. I'll keep you posted.
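If you want to check a trace for these bits without counting bytes by hand, here's a small parsing sketch. It assumes you have extracted the body of the RSN information element (element ID 48) from a beacon or association frame, e.g. with Wireshark; the example bytes are made up for illustration:

# Extract the protected management frame bits (MFPC / MFPR) from the
# RSN Capabilities field at the end of an RSN information element body.
import struct

def pmf_bits(rsn_ie_body: bytes) -> tuple[bool, bool]:
    """Return (MFP capable, MFP required) from an RSN IE body."""
    offset = 2 + 4                                    # version + group cipher suite
    (pairwise_count,) = struct.unpack_from("<H", rsn_ie_body, offset)
    offset += 2 + 4 * pairwise_count                  # pairwise cipher suite list
    (akm_count,) = struct.unpack_from("<H", rsn_ie_body, offset)
    offset += 2 + 4 * akm_count                       # AKM suite list
    (rsn_caps,) = struct.unpack_from("<H", rsn_ie_body, offset)
    mfp_capable = bool(rsn_caps & 0x0080)             # bit 7: MFPC
    mfp_required = bool(rsn_caps & 0x0040)            # bit 6: MFPR
    return mfp_capable, mfp_required

# Made-up example: WPA2-PSK (CCMP) RSN IE with PMF capable but not required
example = bytes.fromhex("0100" "000fac04" "0100" "000fac04" "0100" "000fac02" "8000")
print(pmf_bits(example))   # (True, False)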

The article linked above remarks that PMF counters all management frame attacks observed so far. One thing it can't protect against is attacks with Request To Send / Clear To Send frames that include long reservation times for transmissions (up to 32 milliseconds). The good thing is, however, that such frames are not network specific and would thus not only slow down the attacked network but also the hotel Wi-Fi itself on the same channel. In other words, this is no loophole for hotels that have a special 'treat' in store for their customers…
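The 32 millisecond figure, by the way, follows directly from the size of the Duration field used for these channel reservations:

# The 802.11 Duration field carries the reservation time in microseconds
# in its lower 15 bits, which caps a single RTS/CTS reservation.
max_duration_us = 2**15 - 1                    # 32,767 microseconds
print(f"maximum reservation per frame: {max_duration_us / 1000:.1f} ms")   # ~32.8 ms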

GPRS to LTE Reselection During Data Transfers

It has been standard practice in mobiles for many years now to implement GPRS to 3G reselection during data transfers. It might sound like a small and unimportant feature, but in mobility scenarios it's very important when a subscriber loses 3G coverage during a data transfer and drops down to 2G. Without this feature the device would be stuck on the very slow 2G layer until the data transfer has finished. Quite to my surprise, the same feature for LTE has only become available in mobile devices in the last year or so. My Galaxy S4 can still get stuck on 2G during data transfers or when I use it for tethering on the train, and it kind of 'rescues itself' to 3G rather than going back to LTE when both networks become available again. But I have noticed that newer devices now have the ability to also search for LTE during 2G transmission gaps and go back to LTE. That makes a real difference, especially in areas where only 2G and LTE are available, which in Germany is the case in a lot of rural areas. A small feature but a big benefit to the traveling user. What's still outstanding, however, is 3G to LTE redirection/handover.