UMTS 900 Would Be Great For The Highlands

A random thought today: UMTS 900, the 3G variant running in the 900 MHz band that has so far mostly been used for GSM, is already deployed in a couple of countries such as Finland and France to bring 3G coverage to rural areas. Mobile devices for this frequency, for example from Nokia, are also already on the market. The big difficulty with UMTS 900 is finding enough space in the narrow 900 MHz band for the 5 MHz carriers. However, I suspect that is mostly an issue in cities and less so in sparsely populated areas such as the Scottish Highlands, where only a few GSM carriers are in active use and base stations are spaced 7 km apart or even more.

I know, the UK has so far not allowed the use of 3G in the 900 MHz band due to open market questions, but from a user's point of view I think it would be a great thing. In the mid to long term, however, I think the 900 MHz band will be opened up for other technologies anyway, as even in cities, 3G and later LTE will have a big disadvantage compared to GSM: much inferior in-house coverage. As we go from a voice centric to an IP centric wireless network architecture, it seems like a natural evolution of things.

No EDGE in the UK

One thing that surprises me a bit about the UK wireless market is that despite it being one of the most competitive in Europe, Vodafone has not deployed EDGE in its GSM/GPRS network. Some might argue that there is no necessity for it, as 3G is the playground for mobile broadband these days. Not so fast, I say.

Take bigger cities like London, for example. Millions of people must be out and about with their Blackberries, most of them still on GPRS and not on 3G. They would surely benefit from an EDGE upgrade in terms of speed. It would also increase the capacity of the GPRS network, as data is transferred more efficiently over the air, although capacity alone doesn't seem to make the upgrade necessary: I haven't heard complaints about slow Blackberry e-mail delivery in London yet.

Personally, I also often lock down my N95 to GSM only, as web browsing with Opera Mini is very bandwidth efficient; it increases battery lifetime significantly and minimizes the times the mobile loses coverage, e.g. when entering buildings or while I'm traveling by train. I also noticed no slowdowns on GPRS in London, which suggests that the current Vodafone GSM network capacity copes well with 2G data traffic.

The Scottish Highlands are the other extreme. Except in a few towns, there's no 3G coverage, and GSM base stations are spaced wide apart. That makes it difficult terrain for broadband Internet. Again, Opera Mini performed very well on the GPRS-only network, but I really wished for some EDGE so that full web browsing would have been possible as well.

But for the moment, it seems it's not to be had. I wonder if integrated GSM/UMTS/LTE base stations with a common backhaul link might change this in the future?

Snapshot from Paris Metro Cabling for GSM Coverage

[Image: metro cabling] Here's a rare snapshot (for those of you interested in network details…) of a splitter/combiner for GSM coverage inside the Paris metro. An interesting detail: the component covers all frequency bands between 870 and 2170 MHz, i.e. both GSM bands are supported as well as UMTS. So while I don't know if UMTS capable antennas have been installed underground, at least the cabling and passive components seem to support 3G once the operators want to upgrade the underground system. So, how about it?

Orange in the Metro

Over the past year I have noticed that the Orange 2G and 3G network was getting slower and slower in the Paris metro, especially during rush hour. At some point it was almost unusable, with Opera Mini page load times exceeding 15 seconds. The strange thing was that it affected both the 2G and the 3G network, so it's difficult to tell if this was due to an overload on the air interface or some other bottleneck in the system. Whatever it was, however, it has improved a lot lately. Opera Mini pages are now loading very quickly again and the e-mail client retrieves incoming messages in a flash. Whatever you have done, dear Orange, it has worked. Or is it just that all the "Blackberries" are on vacation at the moment? Let's hope not…

MIMO Testing Challenges

Over at Betavine Witherwire there's an interesting post on the challenges of consistently testing multi-antenna devices, which will shortly appear on the market. The author of the post mentions that even without MIMO, 3G network capacity could increase by 50% if all devices were equipped with multiple receive antennas and sophisticated noise cancellation algorithms. Obviously that also translates into higher throughput per device. Consequently, network operators are likely to be very interested in these developments, and accurate testing of the performance enhancements is a must.

While many tests with mobile devices today are performed with the air interface simulated over a cable, that won't work that easily anymore for MIMO and receive diversity as the antennas in the device are effectively bypassed. It's the antennas and their location and shape inside the device, however, that will make the big difference. More details in the post linked to above.

So I wonder if it's possible to model the impact of the antennas by simulating their characteristics, in addition to the signal path, with a simulator box that sits on the cable between a real base station and the mobile device!?

A formidable challenge and I look forward to what the guys in 3GPP RAN4 come up with.

Solar Powered GSM in the Dominican Republic

[Image: Flexenclosure solar powered base station] In the past two years I've seen a number of companies at the Mobile World Congress working on solar and wind powered GSM base station solutions targeted at countries where the power grid is unreliable and many base stations are powered by diesel generators. It looks like the industry is now slowly moving from the concept phase to practice.

Apart from the environmental aspect, diesel generators need fuel, which is sometimes very difficult and expensive to transport to rural areas. So if solar or wind power can partly or fully supply a GSM base station, that's good for the bottom line and for the environment as well.

Here's a link to a press release over at TeleGeography reporting that Orange Dominicana has started rolling out solar powered base stations. The article concedes at the end that it's only 30 base stations for now, but it's a start.

The press release doesn't mention which solution is being used. At the World Congress I've seen VNL, for example, which develops very low power GSM base stations with limited range, and Flexenclosure, which works together with, among others, Ericsson (see picture on the left).

Relative Cost of Voice over GSM, UMTS and LTE

The other day, a reader asked whether it is true that a voice call over a UMTS circuit switched bearer is less expensive than over a packet switched UMTS bearer. Good question, and quite difficult to answer, as there are many parameters. But nevertheless, let's expand the question and look at GSM and LTE as well.


GSM

In the GSM world, things were simple at first. There's a 200 kHz carrier into which you can squeeze 8 timeslots. On the main carrier of a cell, 6 of those 8 timeslots can be used for voice; on all other carriers, every timeslot can carry a voice call. Further, the adjacent carrier can't be used due to overlap, so the carrier's bandwidth is effectively 400 kHz. To increase the number of calls, the network operator can use AMR half rate, theoretically doubling voice capacity. Here it starts to get difficult, as a half rate channel should not be used under weak signal conditions, i.e. some calls should fall back to a full rate channel so that more redundancy and error correction information can be added to prevent the call from dropping. Anyway, a full rate channel carries a voice stream coded at 12 kbit/s in each direction. Add error detection and correction bits and you end up with around 28 kbit/s.
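The arithmetic above can be sketched in a few lines of Python. The timeslot and bandwidth figures are taken straight from the text; the per-call spectrum numbers are simply derived from them:

```python
# Back-of-the-envelope GSM voice capacity, using the figures from the text.
CARRIER_KHZ = 200              # one GSM carrier
EFFECTIVE_KHZ = 400            # adjacent carrier unusable, so count it too
SLOTS = 8                      # timeslots per carrier
MAIN_CARRIER_VOICE_SLOTS = 6   # two slots on the main carrier carry signaling

def voice_calls(carriers, half_rate=False):
    """Simultaneous voice calls across n carriers; the first is the main carrier."""
    slots = MAIN_CARRIER_VOICE_SLOTS + (carriers - 1) * SLOTS
    return slots * (2 if half_rate else 1)

# Spectrum cost per call on a fully used secondary carrier:
khz_per_call_fr = EFFECTIVE_KHZ / SLOTS        # full rate: 50 kHz per call
khz_per_call_hr = EFFECTIVE_KHZ / (SLOTS * 2)  # half rate: 25 kHz per call
```

So with two carriers a cell handles 14 full rate or up to 28 half rate calls, at an effective spectrum cost of 50 kHz (or 25 kHz) per call.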

UMTS Circuit Switched

In terms of resource use, things are similar to GSM. The AMR full rate codec streams at around 12 kbit/s and redundancy information is added on the radio link. So I'd say resource use is comparable to GSM.

UMTS / HSPA Packet Switched

Packet switched means Voice over IP. Here things start to get difficult, because what is VoIP in practice? There's no single standard solution as in the wireless circuit switched domain, so there are several possibilities.

Let's look at standard SIP first, using the uncompressed 64 kbit/s PCM codec (G.711). Add IP overhead and you stream at 80 kbit/s in each direction. Quite a difference to the 12 kbit/s used in the circuit switched wireless network. But wait, it's 28 kbit/s once error detection and correction are included. Similar coding overhead has to be added to the 80 kbit/s as well, but how much is difficult to say: it depends on how far the user is from the base station, i.e. which modulation and coding scheme is used. So to get realistic values, you have to calculate with a traffic mix. But no matter how you calculate it, there's no way to bring the 80 kbit/s down to the circuit switched value.

Some SIP implementations also use AMR if they detect that both ends support it. That brings the data rate down to 12 kbit/s plus IP overhead, for a total of around 32 kbit/s. For details see this post. That's still almost three times more than 'native' AMR. For users very close to the base station, not a lot of redundancy needs to be added, so we could come pretty close to GSM or even do better. But then you switch on half rate AMR and GSM is doing better once again. You could halve the codec rate in VoIP as well, but the IP overhead won't go down, and it's already about two thirds of the total bandwidth for full rate AMR.
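A quick sanity check of these codec figures, assuming 20 ms packets and an uncompressed 40 byte RTP/UDP/IPv4 header stack (these packetization assumptions are mine; the ~32 kbit/s figure in the text comes out slightly higher, presumably because it includes additional framing overhead):

```python
def voip_rate_kbps(codec_kbps, ptime_ms=20, header_bytes=40):
    """One-direction IP data rate of a VoIP stream.

    header_bytes: RTP (12) + UDP (8) + IPv4 (20) = 40 bytes, uncompressed.
    """
    payload_bytes = codec_kbps * 1000 * (ptime_ms / 1000) / 8
    packets_per_sec = 1000 / ptime_ms
    return (payload_bytes + header_bytes) * 8 * packets_per_sec / 1000

g711 = voip_rate_kbps(64)    # 80.0 kbit/s, matching the figure above
amr = voip_rate_kbps(12.2)   # ~28 kbit/s with plain IPv4 headers
```

With these assumptions the 40 header bytes make up well over half of each AMR packet, which is why halving the codec rate helps VoIP far less than it helps GSM.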

Better spectral efficiency can also compensate to some extent for the higher VoIP data rates: mobiles close to the base station not only require fewer error detection and correction bits in the stream but can also use a higher order modulation, making the transmission more efficient than GSM circuit switched. But again, that applies only to some, not all, mobile devices.

Something that works against VoIP efficiency over wireless networks is channel assignment. While circuit switched timeslots are assigned only once at the beginning of a call, bandwidth for VoIP calls over HSPA needs to be re-assigned frequently. There were some efforts in 3GPP to reduce this need with static assignments (HS-SCCH-less operation), but things get messy quite quickly here.

But wait, there's IP header compression in UMTS, at least in theory. In practice, however, it is not used as far as I know, so I won't put it into the equation.

Over the top VoIP services such as Skype use quite bandwidth efficient codecs that are in a similar bandwidth range as AMR. There are lots of other VoIP systems that could be used over wireless as well, but I don't know their bandwidth needs, so I won't discuss them here.


LTE

There's real pressure with LTE to switch to VoIP, and similar dependencies on features such as modulation and coding, signaling overhead, etc. as in UMTS will have an impact. Robust header compression will probably make it into LTE much faster than it did into UMTS, be it for IMS, for VOLGA, or for any other network operator voice solution that will be used.

The Calculations

The book by Hari Holma and Antti Toskala on UMTS/HSPA has some interesting calculations on VoIP capacity. Their conclusion is that UMTS packet switched voice capacity can easily exceed that of GSM – if, and that's the big if, all optimizations are present and switched on. For over the top VoIP, however, it's unlikely that these conditions will be met.


So, as you have seen, VoIP over UMTS or LTE can be more or less efficient than circuit switched voice over GSM, depending on how you look at it. Maybe the question for the future is therefore not one of efficiency, but whether mobile network operators will continue to be the main providers of wireless voice calls, or whether over the top voice providers, for whom the radio network optimizations do not work as efficiently, will take a bigger share of the market.

LTE and UMTS Air Interface Comparison

There's a very interesting blog entry over at the 3G and 4G Wireless Blog by Devendra Sharma on the differences between the LTE and UMTS air interfaces beyond just the physical layer. By and large he comes to the conclusion that the LTE air interface and its management are a lot simpler. I quite agree, and hope that this translates into significantly more efficient power management on the mobile side (see here) and improved handling of the small data bursts of background IP applications (see here and here). I guess only the first implementations will tell how much it is really worth. I'm looking forward to it.

Radio Signaling Load of Background IP Applications

Here's a link to an interesting post on Mobile Europe about the impact of IP applications running in the background on wireless networks. In short, the message is that although instant messengers, e-mail applications and other connected programs running in the background require relatively little bandwidth, they nevertheless have a significant impact on overall radio link capacity. So why is that, a reader recently asked me?

Let's look at a practical example: On my Nokia N95, I use the VoIP client over Wi-Fi a lot. The client works in the background and every now and then communicates with the SIP VoIP server in the network to let it know that it is still there and to keep the channel open for incoming messages. This requires very little bandwidth, as only a few messages are sent, about 2 a minute from an IP point of view. For the full details, have a look at this earlier post.

From a 3G cellular radio network perspective, however, things look a lot different. There are two possibilities: The network could keep the radio link to the mobile device open all the time. This, however, would drain the mobile's battery very quickly as the mobile constantly has to monitor the link for incoming data. Further, this would waste a lot of bandwidth, since a full air interface connection requires a frequent exchange of radio link quality messages between the mobile and the base station. In other words, there is lots of overhead monitoring and signaling going on while no data is transferred.

The other option, usually used today, is to put the mobile device into a less active state. In practice this means that when the network detects little activity, it assigns channels that are less efficient but do not require constant radio link quality measurement reports and dedicated channel resources. That's already a bit more efficient, but it still consumes a lot of energy on the mobile side; for details see my earlier post on the "FACH power consumption problem". Well configured networks detect that only little data is being transferred and keep the mobile in that state. Other networks immediately jump back to the full channel when data is exchanged again, which requires lots of radio link signaling. Further, in UMTS the channel switching is managed by the Radio Network Controller and not by the base station itself, putting quite a high burden on a centralized network element.

So what does this mean in practice? In today's networks, a single base station covers around 2000 mobile devices. This is not a problem with traditional wireless voice, as there is no ongoing signaling between the device and the network while no call is in progress. With VoIP that is not optimized for wireless, however, as described above, 2 messages are exchanged per minute per device, plus potentially further radio interface signaling for channel switching and radio link measurements. In other words, such background IP packets have a higher radio link capacity impact than their size suggests, compared to big IP packets that are part of a time limited high bandwidth data flow, e.g. a web page transfer.

Now multiply that background traffic by 2000 devices per base station (assuming for a moment a pure, non-optimized IP world) and you get around 66 messages a second that need to be transmitted. Many of these require state changes, creating additional signaling in the network. Add IM, e-mail, etc., and the number rises further.
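The message rate is easy to verify, using only the numbers from the text:

```python
devices_per_cell = 2000   # devices covered by a single base station
keepalives_per_min = 2    # SIP keep-alive messages per device, from the example above

# Aggregate keep-alive load arriving at one base station:
msgs_per_sec = devices_per_cell * keepalives_per_min / 60
print(msgs_per_sec)  # ~66.7 messages per second, before IM and e-mail are added
```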

Now why is this different from fixed line networks? There are two reasons: First, in fixed line networks there is usually only a single household behind a DSL line, with only a few devices creating background noise. Second, in fixed networks no additional overhead is required for managing a shared transmission resource, i.e. the air interface. In other words, a small packet just takes that amount of bandwidth on the cable, no more, no less.

To be clear: I am not saying this is a problem for wireless networks (yet); it's just a lot more background traffic than there used to be, and it requires more bandwidth on the air interface than its size suggests. Also, the standards are addressing this change in application behavior, for example with UMTS enhancements such as Continuous Packet Connectivity, or in LTE by moving radio state management from a centralized network element directly into the base station.

In any case, I guess such always-on applications will be optimized for mobile use over time, i.e. more push than poll and less keep-alive signaling. But that's probably done not to please network operators or to increase overall network capacity, but to reduce power consumption on mobile devices.

How the LTE Core Network talks to UMTS and GSM

An important piece of functionality that has to be in place from day one when LTE networks are launched is the ability for mobiles to roam from LTE to other types of radio access networks. In most parts of the world, except for the US and Canada, that means UMTS and GSM. While doing some research on how this works from a network point of view, all the books I have come across so far point to the new S3, S4 and S12 interfaces between the 2G and 3G network nodes (the SGSN and RNC) and the LTE core network nodes (the Evolved Packet Core (EPC), to be precise), i.e. the Mobility Management Entity (MME) and the Serving Gateway (S-GW).

One might be happy with this answer from a theoretical point of view, but in practice this approach might be a bit problematic. As the functionality has to be there from day one, using the new interfaces means that the software of the 2G/3G SGSNs and RNCs needs to be modified. Now one thing you don't want to do when introducing a new system is to fiddle with the system that is already in place, as you've already got enough work at hand. So I was wondering if there is an alternative to introducing new interfaces, even if only for Inter-RAT (Inter Radio Access Technology) cell reselection triggered by measurements on the mobile side.

It turns out that there is. After some digging, Annex D of 3GPP TS 23.401 provided the answer (sometimes I wonder what is more important, the specification text or the annexes…). Here, a network setup is described in which the 2G and 3G SGSN is connected to the LTE world via the standard Gn interface (Gp in the roaming case) to the MME and the PDN-Gateway. To the SGSN, the MME looks like another SGSN and the PDN-Gateway looks like a GGSN. No modifications are required on the 2G/3G side. On the LTE side, this means that both the MME and the PDN-Gateway have to implement the Gn / Gp interface. But that is something done on the new network nodes, which means it's not a problem from a real-life network introduction point of view. With the Gn / Gp interface support in place, LTE and roaming between the different radio access networks could be introduced as follows:

Cell Reselection Only at First

To make things simple, LTE networks are likely to be launched with only cell reselection mechanisms to 2G and 3G networks instead of full network controlled handovers. That means the mobile is responsible for monitoring the signal strength of other radio networks while connected to LTE and autonomously decides to switch to GSM or UMTS when leaving the coverage area of the LTE network. When using the GSM or UMTS network, the mobile also searches for neighboring LTE cells and switches back to the faster network once the opportunity presents itself (e.g. while no data is transmitted).
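As a rough sketch, the mobile-autonomous decision logic described above could look like this. The threshold values, hysteresis and function name are purely illustrative assumptions of mine; real devices use the reselection parameters broadcast in the system information:

```python
# Hypothetical thresholds; real values come from broadcast system information.
LTE_LEAVE_DBM = -120   # below this, LTE coverage is considered lost
LTE_ENTER_DBM = -110   # hysteresis: require a stronger LTE signal to return

def reselect(current_rat, lte_dbm, data_active):
    """Return the radio access technology the mobile should camp on next."""
    if current_rat == "LTE":
        # Leave LTE only when its signal drops below the exit threshold.
        return "UMTS/GSM" if lte_dbm < LTE_LEAVE_DBM else "LTE"
    # On 2G/3G: move back to LTE only while idle and LTE is strong enough,
    # since a reselection during a transfer would interrupt it.
    if not data_active and lte_dbm >= LTE_ENTER_DBM:
        return "LTE"
    return current_rat
```

The hysteresis gap between the two thresholds is the standard trick to avoid ping-ponging between networks at the coverage edge.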

Handovers Follow Later

The advantage of cell reselection between different types of access networks is that it is simple and requires no additional functionality in the network. The downside is that when a network change is necessary while a data transfer is ongoing, the mobile will either not attempt the change at all, or the change results in a temporary interruption of the transfer. The answer to this downside is a network controlled handover between the different radio systems. This makes the change between access networks a lot smoother, but requires changes in both the new and the old radio networks. On the GSM/UMTS side, the software of the base stations and radio network controllers has to be upgraded to instruct the mobile to also search for LTE cells while it is active and to take the results into account in the existing handover mechanisms. As far as I can tell, no modifications are required in the SGSN, as transparent containers are used to transfer non-compatible radio network parameters between the different networks.

Packet Handovers Today

At this point I think it is interesting to note that packet handovers are already specified today for GPRS/EDGE to UMTS and vice versa. However, I haven't come across a network that has implemented this functionality. Maybe the speed difference between the two radio access networks makes the effort undesirable. Between UMTS and LTE, however, such packet handovers might finally make sense, as in many scenarios the speed difference might not be that great.

The GGSN Oddity

One last thought: In Annex D, the 2G/3G GGSN functionality is always taken over by the PDN-GW. That means that an LTE capable mobile should never use a 2G/3G-only GGSN when first activating a PDP context in GPRS/EDGE or UMTS. If it did, I don't see how it would be possible to reselect to the LTE network later, because the GGSN is the anchor point and can't change during the lifetime of the connection. If an "old" GGSN were the anchor point, the MME and S-GW would have to talk to the "old" GGSN after a cell reselection or handover from GPRS/EDGE or UMTS to LTE, instead of to a real PDN-GW. That's a bit odd, and I don't see it described in the standards.

There are several ways this could be achieved. One possibility is a special APN that triggers the use of a combined GGSN/PDN-GW when the connection is established; another is analysis of the IMEI (the equipment ID). While the first idea wouldn't require new software in the SGSN, the second one probably would, and there is always the chance of missing some IMEI blocks in the list on the SGSN, especially for roamers, so it's probably not such a good idea after all. Another option would be to replace the GGSNs in the network or upgrade their software so they become combined GGSNs/PDN-GWs. However, there is some risk involved in that, so some network operators might be reluctant to do it at the beginning.

If you know more about this or have some other comments or questions in general, please leave a comment below.