The Current LTE Spectrum Situation

I was recently asked by a friend how I see the current LTE spectrum assignments. I would have liked to give a simple answer but it is actually not quite straightforward. This is how I see it at the moment:

Europe:

  • 2.1 GHz band, used for 3G today but with a lot of unused capacity left: Most likely the first band where 5 MHz LTE carriers will be deployed. No limitations from regulators, LTE can be deployed straight away.
  • 2.5 GHz band: Spectrum auctions are still outstanding in most countries. Good for going beyond 5 MHz carriers.
  • 900 MHz band: Maybe some deployments of 5 MHz carriers or less. Good for in-house coverage, but the band is heavily used for GSM today, so it's difficult to clear enough space for a meaningful LTE deployment without running into congestion issues. It might get better as more people get 3G phones and some of the voice and data traffic currently running over GSM in the 900 MHz band starts flowing over 3G in the 2.1 GHz band. That reduces the load, so carriers might be able to clear some spectrum for LTE, provided they haven't opted for a UMTS 900 deployment, which is available today.
  • 800 MHz band (Digital Dividend): There is a strong push in Europe at the moment to free the same spectrum in all member countries. First trials have started to bring high speed Internet to rural areas with HSPA and LTE.

North America:

  • Verizon will deploy LTE in the 700 MHz band and has a single 10 MHz carrier available. That's not much. The spectrum has been assigned, so it can be deployed straight away.

Japan:

  • According to the presentation of NTT DoCoMo at the Mobile World Congress in Barcelona this year, they will also deploy LTE in the 2.1 GHz band, replacing one of their UMTS carriers with LTE at the beginning.

China:

  • Not sure

Unfortunately, it does not end here: 3GPP has defined lots of additional frequency bands for LTE, so fragmentation is likely to increase. As always, comments, additions, etc. are very welcome!

Operator QoS for Skype & Co.

Nokia recently announced that they will integrate Skype into the Nokia N97. Reactions, obviously, have been mixed. But I think the trend is difficult to stop; if not on this device it will be on another, or in another way entirely. Some network operators have responded by announcing that they are thinking about introducing special tariffs which would include VoIP. But there is one thing over-the-top VoIP (i.e. voice not delivered via the operator's circuit switched network) doesn't have today, and that is the possibility to ensure quality of service (i.e. delay and jitter), especially over the air interface.

However, with a bit of imagination it wouldn't be too difficult to set this up. Here's one example of how it could work: In tariffs that take VoIP into account, the network could establish a secondary PDP context (UMTS) or a dedicated bearer (LTE) when it detects IP traffic of VoIP applications. This prioritizes the voice IP packets over other IP packets in the data stream of the user and also over IP packets of other users. Most mobile network operators already have deep packet inspection devices in their networks for all sorts of things and these could easily do the job.
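To make the idea a bit more tangible, here's a sketch in Python of what the classification logic on such a DPI box could look like. The port heuristics and the bearer_manager interface are purely my own assumptions for illustration, not an operator or 3GPP API:

```
# Purely illustrative sketch: a DPI function that flags VoIP traffic and
# asks the core network for a prioritized bearer. The port heuristics and
# the bearer_manager interface are assumptions, not an operator or 3GPP API.

VOIP_SIGNALLING_PORTS = {5060, 5061}   # SIP over UDP/TCP (example heuristic)
RTP_PORT_RANGE = range(16384, 32768)   # typical RTP media port range (example)

def looks_like_voip(packet):
    """Very rough heuristic: SIP signalling or RTP-like UDP flows."""
    if packet["dst_port"] in VOIP_SIGNALLING_PORTS:
        return True
    return packet["protocol"] == "UDP" and packet["dst_port"] in RTP_PORT_RANGE

def handle_packet(packet, bearer_manager):
    """On detecting a VoIP flow, request a dedicated bearer (LTE) or a
    secondary PDP context (UMTS) with conversational quality of service."""
    if looks_like_voip(packet):
        bearer_manager.request_prioritized_bearer(
            subscriber=packet["imsi"],
            flow=(packet["src_ip"], packet["dst_ip"], packet["dst_port"]),
            traffic_class="conversational",  # prioritized over best-effort data
        )
```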

I think it's an interesting technical possibility, let's see if somebody picks it up and puts it into commercial reality.

LTE – The First Global Cellular Standard – But Does it Matter?

Indeed, at first thought, LTE looks set to become the first global cellular standard, the one to which GSM, UMTS/HSPA, CDMA and potentially other cellular wireless technologies are likely to converge. But does it really change anything?

Being a global standard does not necessarily mean all LTE capable devices can communicate with all networks around the globe. There are two main issues:

1) FDD and TDD Mode

While in most parts of the world FDD (Frequency Division Duplex) will be the dominant mode of the air interface, TDD (Time Division Duplex) is pushed especially by China as an upgrade path for TD-SCDMA. So an FDD LTE device will not be able to use a TDD network and vice versa. With some luck we might see devices that can do both FDD and TDD, but nobody's really commenting on how feasible that is. Only time will tell.

2) Two Dozen Different Frequency Bands

What's worse is the number of frequency bands that are foreseen for LTE. In practice, this will mean that devices will be built for some but not all of those frequency bands. So it's nice to have a global standard but it's unlikely the mobile devices themselves will be usable on a global scale. The single 4G device working everywhere will remain a nice dream.

A Little Light At The End Of The Tunnel For Vendors

LTE being a global standard is a good thing for network equipment vendors. Most of the equipment will be the same, including the base stations, where only a few parts or modules differ in order to work on a different frequency band or operating mode (TDD/FDD).

Benefits For Network Operators

An economy of scale is created for networks operating on the main LTE frequency bands (e.g. 900, 1800, 2100 and 2600 MHz). Most other frequency bands are only used by a few network operators, so it's unlikely these will get the same prices from network vendors as their colleagues who use the mainstream bands. Also, the number of devices working outside the standard LTE bands is likely to be as limited as for UMTS/HSPA today. Just have a look at how many 3G devices are available for HSPA networks in the U.S. compared to Europe.

Benefits For Users

For the users, I don't see a big change from the situation with HSPA today. Where the mainstream frequencies are used, there is a big choice of devices and this is likely to be the same with LTE. And network operators using less common frequency ranges will probably see as few devices as those operating 3G networks in such bands today.

What We Really Need

So what we really need is not only a global standard but also global frequency bands, so that everyone benefits equally. But, unfortunately, that's a dream that is very unlikely to come true anytime soon.

Voice – Bearer Aware, Bearer Adaptive or Bearer Agnostic?

It seems I am not the only one thinking quite positively about Voice over LTE via Generic Access Network (VOLGA). Recently, Ajit Jaokar posted an interesting article in which he mentions that with VOLGA, the traditional circuit switched voice service becomes a bearer aware application, as it can choose between a 2G circuit bearer, a 3G circuit bearer and an IP based bearer over LTE. All seamlessly, with handovers during the call and all bells and whistles attached!

An interesting way to look at it, even more so as the bearer awareness does not come into play on the mobile device but actually in the network. This is because the controlling entity for the voice call, the mobile switching center (MSC), sits in the network and is informed by the network that a different bearer should be selected. It can then decide to go along, arrange for the network to prepare the handover and then instruct the device to make the jump.

So maybe VOLGA makes voice even more than bearer aware!? So far the term 'bearer aware' has mostly been used for applications that are aware of which kinds of networks are available at any given time and then choose which IP network to use, or stay put in case a network is available but the cost attached to it is too high to make the application feasible.

In the case of voice, however, the service can ensure continuity by jumping from one bearer to another. So terms like 'bearer adaptive' or maybe even 'bearer agnostic' come to my mind, because that voice call will just work over any kind of network the device supports.

It could even work over the Wi-Fi you have at home if you extend the idea of VOLGA. Not for the moment, as the standard currently focuses on LTE, but in the future, who knows?

Satellite Internet on Thalys High Speed Trains – A Report

It's great when two high speed technologies come together: High speed trains running at over 300 km/h and high speed Internet access. Thalys, whose trains travel between Paris, Brussels, Amsterdam and Cologne, has equipped all of its trains with satellite, Wi-Fi, UMTS and GPRS based high speed Internet access, accessible to passengers via standard Wi-Fi. When I recently traveled on one of those trains, I could hardly wait to get on board to test and use the system.

The picture on the left shows the satellite antenna installation on top of one of the coaches. It looks a bit odd on the otherwise very streamlined train, but the round shape probably keeps the additional drag to a minimum. Nevertheless, I'd be interested in finding out how much extra energy is necessary to push the train beyond 300 km/h because of it.

At 7 a.m., throughput in both uplink and downlink between Paris and Brussels was tremendous. Speedtest.net reported downlink speeds of more than 10 MBit/s and more than 3 MBit/s in the uplink direction! While the link dropped a number of times on the trip to Brussels, it was only for a few seconds each time, so it was probably only apparent to an attentive observer like me, running a data trickle in the background to detect just such occurrences. However, the outages were short enough that they didn't affect streaming applications once enough data was buffered. Watching a YouTube video, full screen and in HD quality, worked just fine.

As all data is transferred via a satellite in a geostationary orbit, round trip delay times were in the region of 650 ms. While voice calls and even Skype video calls work well over the system the delay can be felt in the conversation. Loading a graphics intensive web page works quite well and fast but it feels a bit sluggish for a moment after clicking on a link or entering a web address before the download of the page starts. This is again due to the very high round trip delay time compared to other systems such as ADSL with a round trip delay time of 50 ms, or the 120 ms over a 3G connection. Having said all that, the experience is still great, especially taking into account that the countryside is passing by at 300+ km/h when looking out of the window while that HD video is streamed over the satellite.
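The 650 ms figure fits quite well with what the geometry predicts. Here's a quick back-of-the-envelope calculation; the slant range from Europe and the equipment delays are rough assumptions:

```
# Back-of-the-envelope round trip delay over a geostationary satellite.
C_KM_PER_S = 299_792.458   # speed of light
GEO_ALTITUDE_KM = 35_786   # height above the equator; the slant range from
                           # Europe is somewhat longer, so this is a lower bound

one_direction_ms = 2 * GEO_ALTITUDE_KM / C_KM_PER_S * 1000  # ground -> satellite -> ground
round_trip_ms = 2 * one_direction_ms                        # request plus response
print(f"propagation delay only: {round_trip_ms:.0f} ms")    # roughly 480 ms

# Add modem, gateway and terrestrial backhaul delays and the ~650 ms
# measured on the train looks quite plausible.
```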

The satellite connection has some real limitations, though: In Europe, the geostationary satellite hangs close to the horizon, so it is not always possible during the trip to keep the connection. In such cases a ground based backup is used. In the Brussels main station, for example, Wi-Fi is (probably) used. Downlink speeds came close to 16 MBit/s and round trip delay times were lower than 50 ms. The tunnels around Brussels were covered as well, although I am not sure exactly what technology was used. In other places, especially in the hilly terrain between Belgium and Germany, the satellite connection doesn't work too well either, probably because the train winds its way through narrow valleys and many tunnels. GPRS and UMTS networks in that region seem to be patchy at best, so the experience on that part of the track wasn't too great.

I should also mention that I didn't find any services that were blocked. VoIP worked well, IM worked well and my IPSec based VPN also worked fine over the system.

In the evening I made the same trip in the other direction. It seems a lot more people were using the system in the evening, as speeds were much slower than in the early morning. While I could still reach fantastic transmission peaks of 3 MBit/s in the downlink and more than 1 MBit/s in the uplink, I experienced continuously high packet loss and frequent connection outages in the range of minutes, even on the flat terrain between Brussels and Paris. The bad weather and heavy rain might also have had something to do with it; it's difficult to tell from a single ride.

Summary:

Of course I had my expectations before trying the system. In most cases I found it to be much faster than I expected. Especially the main applications such as web browsing, e-mail and VPN tunneling to the company network worked fine. The system has its limitations in hilly areas and cities when there is no direct line of sight to the satellite. While the system automatically switches to GPRS or UMTS in such cases, it didn't work particularly well in many of those areas, as they were probably not covered very well. It can work much better over 3G, as I have experienced here. Overall, however, I was very impressed with the system and I think it's a great service!

Can One Deduce From Chipset Specs What Future Devices Will Look Like?

…I was asked today. A clear opinion here: Yes and no.

Yes: When I reviewed some of the future chipsets for my recent book, it was clear we are moving to processor speeds beyond 600 MHz, the built-in camera hardware units of those chipsets support resolutions of 10-12 megapixels, and a touch panel interface is also part of the package.

No: Such specs tell you nothing about the form factor of a future device. Examples: Just knowing that there is a touch interface tells you nothing about how usable the interface will be. A chip spec doesn't tell you the physical characteristics of the device, e.g. whether it will have a hardware QWERTZ keyboard. Also, there are usually supporting chips around the chipset such as GPS, motion sensors, compass, etc. How they are mixed and matched is not on the datasheet either.

One interesting domain I haven't looked into much yet is the specs of the radio front end chips. This info would give a very interesting insight into which technologies (GSM, UMTS, LTE, CDMA) can be supported in future devices and, equally important, which frequency bands can be handled with a single front end chip. If you have some good references here, please consider leaving a comment.

About Open Innovation and External Input

A non-technical blog entry today addressing the question many have asked me before: Why am I running this blog?

Obviously, I like to write and I like to share the stuff I have learnt or think about with other people. My thinking and learning, however, is not self-contained, i.e. I don't just sit down in seclusion and wait for that light bulb over my head to light up. I am inspired by things I see, read and experience.

It's great, for example, to have full access to 3GPP standards, to the meeting reports, to the change requests, in short, to everything that goes on in standardization. If there's a doubt, these documents clear things up.

It's also the blogs out there on wireless topics that keep me thinking. By reading them, I challenge my thinking and end up with lots of "yes, and" or "yes, but" or "yes and what if" questions which lead to new insights, ideas, and often to new questions.

It's the people I meet at conferences such as ForumOxford, the Mobile World Congress, Mobile Mondays and others who equally challenge my thinking and give me new ideas, directly or indirectly.

And last but not least, it's the books on mobile topics, which I sometimes review on this blog. All together, it's my personal open innovation.

So by writing this blog, I not only have to think things through clearly before and while I write them down, it's also about giving something back to the community from which I profit. So, thanks very much for reading this blog and especially for your comments, both agreeing and disagreeing; they are a vital part of the process as well!

Application Aware OSes – Knowing When To Drop Back to GPRS

Here's a thought about reversing the concept of bearer aware applications: What if the OS of a mobile device monitored the traffic behavior of the applications running on it and then reselected the most suitable access network for the application mix currently in use?

Let me give you a practical example: While my e-mail client is only checking my inbox every couple of minutes, it would be much better if the mobile used the 2G GPRS/EDGE network instead of a 3G network in order to conserve power and improve battery life. The problem is very real and has to do with 3G networks keeping the mobile in a much more power consuming state after each data transfer until it goes back into idle mode. For details see here.
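To illustrate the order of magnitude with a quick calculation (the currents and timer values below are assumptions for illustration, not measurements):

```
# Rough illustration of the 3G "tail energy" problem for background traffic.
# All figures are assumptions for illustration, not measurements.

POLL_INTERVAL_S = 120        # e-mail client checks the inbox every 2 minutes
TAIL_TIME_3G_S = 10          # assumed time the 3G radio stays in a high power state
CURRENT_3G_ACTIVE_MA = 250   # assumed current in the high power state
ACTIVE_TIME_2G_S = 2         # assumed time GPRS/EDGE is busy per poll
CURRENT_2G_ACTIVE_MA = 100   # assumed current while transferring over GPRS/EDGE

avg_3g_ma = CURRENT_3G_ACTIVE_MA * TAIL_TIME_3G_S / POLL_INTERVAL_S
avg_2g_ma = CURRENT_2G_ACTIVE_MA * ACTIVE_TIME_2G_S / POLL_INTERVAL_S

print(f"3G background polling: ~{avg_3g_ma:.1f} mA on average")  # ~20.8 mA
print(f"2G background polling: ~{avg_2g_ma:.1f} mA on average")  # ~1.7 mA
```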

So when the OS detects that only such periodic and small data transfers are ongoing, it could attach to the GPRS network. This could work even if other applications, such as the web browser, still run in the background but aren't actively being used at the moment. If I then use the browser again to go to a new web page, the OS could quickly go back to the 3G network.

The beauty of the approach would be that applications would not have to be modified. A little hook in the IP stack that monitors incoming and outgoing traffic and, based on that, reselects to a different access network while keeping the IP address shouldn't be too difficult to get into an OS. Maybe something for an Android programmer? Or will Apple or the Symbian Foundation be faster?
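To sketch the idea a bit further, here's roughly what such a hook could look like; read_byte_counters() and request_rat() are hypothetical placeholders for whatever the OS and modem actually expose:

```
import time

# Sketch of an OS-level hook that watches aggregate IP traffic and picks a
# radio access technology. read_byte_counters() and request_rat() are
# hypothetical placeholders for OS/modem APIs, not real functions.

CHECK_INTERVAL_S = 10          # seconds between measurements
LOW_TRAFFIC_BYTES = 2000       # per interval: e-mail polling, keep-alives only

def monitor_and_reselect(read_byte_counters, request_rat):
    last_total = read_byte_counters()      # rx + tx bytes since boot
    while True:
        time.sleep(CHECK_INTERVAL_S)
        total = read_byte_counters()
        delta = total - last_total
        last_total = total
        if delta < LOW_TRAFFIC_BYTES:
            request_rat("GPRS/EDGE")       # background trickle: save power on 2G
        else:
            request_rat("UMTS/HSPA")       # interactive traffic: go back to 3G
```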

Other alternatives to make mobile Internet access for background applications more power efficient could be the Continuous Packet Connectivity feature package of HSPA+ (see here, here and here) or the new air interface of LTE. But whether they are as efficient as GPRS/EDGE and how long it will take for them to become available remains to be seen.

Verizon, LTE and Over the Air SIM card provisioning

Verizon recently announced Gemalto and G&D as their partners for SIM cards and remote provisioning for their LTE rollout. Remote provisioning of SIM cards (e.g. changing the list of preferred network operators, the network name, etc.) has become pretty common over the past couple of years, but there might be a twist with Verizon and LTE:

Today, remote provisioning of SIM cards is done via SMS. When such special SMS messages are received by the mobile device, it automatically forwards them to the SIM card for execution without any interaction with the user. What I am not quite sure about is how that will work over LTE, because SMS over LTE is not standardized. Of course it would be possible to use the "3GPP CS fallback" feature to 2G GSM or 3G UMTS to receive the SMS messages. However, in Verizon's case that might not be possible for two reasons:

  • Their legacy system is based on CDMA, which does not have SIM cards. Hence, the CDMA part of a mobile phone might not have the necessary standardized software to forward those data SMSes to the SIM card.
  • Verizon's current LTE specs do not require LTE terminals to have a CDMA part.

So I am not quite sure how over the air provisioning will work in practice in Verizon's case!? Has there been something standardized in LTE or CDMA for "native" remote provisioning of SIM cards? If you have more info, I'd be happy to hear from you!

LTE Advanced and Cooperative Network MIMO

In today's 3G networks, voice calls are often in the so called “soft handover state”, which means that the radio network controller sends and receives the data for a voice call to and from several base stations simultaneously. The mobile then receives the voice data stream from all of these cells simultaneously and combines the received signals. While this wastes some bandwidth on the backhaul, soft handovers often help to reach mobiles at the cell edge, which receive several cells with a similar signal strength.

For HSPA data transmissions, however, soft handover is not used, as the base stations autonomously decide when to schedule data on the air interface and it is thus pretty difficult to synchronize the transmissions of several cells. In LTE, such direct cooperation between cells is also not foreseen for the moment, even though there might be some benefits for mobiles at the cell edge. For LTE Advanced, however, people seem to have started thinking about it again. Instead of sending the same data stream over two or more cells, they are thinking about a cooperative MIMO scheme, i.e. each base station sends a different data stream and the mobile then analyses each data stream separately. The result would be a higher throughput for that mobile.

As Moray Rumney points out in his recent book, though, such a cooperative MIMO scheme would be quite challenging to implement in the network. First, it would put quite a demand on backhaul capacity and second, data would have to be exchanged between the base stations with a delay of a millisecond or less. O.k., it is still some years away and technology advances, but I tend to agree with him: that's quite challenging to do. What do you think?
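To put a rough number on the backhaul argument: LTE schedules transmissions in 1 ms subframes, so the cooperating sites would have to exchange the user data for every subframe within that time. The throughput figure below is just an example I picked for illustration:

```
# Rough illustration of the backhaul demand of cooperative MIMO.
# The cell-edge user throughput below is an arbitrary example figure.

SUBFRAME_S = 0.001             # LTE scheduling interval: 1 ms
USER_THROUGHPUT_BPS = 50e6     # assumed 50 MBit/s for one cell-edge user

bits_per_subframe = USER_THROUGHPUT_BPS * SUBFRAME_S
print(f"{bits_per_subframe / 1000:.0f} kbit of user data per 1 ms subframe")  # 50 kbit

# That data, plus scheduling decisions and channel state information, would
# have to travel between the cooperating base stations every millisecond,
# with a latency budget well below the subframe duration.
```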