There's a very interesting blog entry over at the 3G and 4G Wireless Blog by Devendra Sharma on the differences between the LTE and UMTS air interface beyond just the physical layer. By and large he comes to the conclusion that the LTE air interface and its management is a lot simpler. I quite agree and hope that this translates into significantly more efficient power management on the mobile side (see here) and improved handling of small bursts of data from background IP applications (see here and here). I guess only the first implementations will tell how much it is really worth. I am looking forward to it.
An important piece of functionality that has to be in place from day one when LTE networks are launched is the ability for mobiles to roam from LTE to other types of radio access networks. In most parts of the world, except for the US and Canada, these are UMTS and GSM. While doing some research on how that works from a network point of view, all books I have come across so far point to the new S3, S4 and S12 interfaces between the 2G and 3G network nodes (the SGSN and RNC) and the LTE core network nodes (or the Evolved Packet Core (EPC), to be precise), i.e. the Mobility Management Entity (MME) and the Serving Gateway (S-GW).
One might be happy with this answer from a theoretical point of view, but in practice this approach might be a bit problematic. As the functionality has to be there from day one, using the new interfaces means that the software of the 2G/3G SGSNs and RNCs needs to be modified. Now one thing you don't want to do when introducing a new system is to fiddle with the system that is already in place, as you've already got enough work at hand. So I was wondering if there was an alternative to introducing new interfaces, even if only for Inter-RAT (Inter Radio Access Technology) cell reselection triggered by measurements on the mobile side.
It turned out that there is. After some digging, annex D in 3GPP TS 23.401 provided the answer (sometimes I wonder what is more important, the specification text or the annexes…). Here, a network setup is described where the 2G and 3G SGSN is connected to the LTE world via the standard Gn interface (Gp in the roaming case) to the MME and the PDN-Gateway. To the SGSN, the MME looks like an SGSN and the PDN-Gateway looks like a GGSN. No modifications are required on the 2G/3G side. On the LTE side, this means that both the MME and the PDN-Gateway have to implement the Gn / Gp interface. But that's something that has to be done on the new network nodes anyway, which means it's not a problem from a real-life network introduction point of view. With Gn / Gp interface support in place, LTE and roaming between different radio access networks could be introduced as follows:
Cell Reselection Only at First
To make things simple, LTE networks are likely to be launched with only cell reselection mechanisms to 2G and 3G networks instead of full network controlled handover. That means that the mobile is responsible for monitoring the signal strengths of other radio networks while connected to LTE and for autonomously deciding to switch to GSM or UMTS when leaving the coverage area of the LTE network. While using the GSM or UMTS network, the mobile also searches for neighboring LTE cells and switches back to the faster network once the opportunity presents itself (e.g. while no data is transmitted).
Handovers Follow Later
The advantage of cell reselection between different types of access networks is that it is simple and no additional functionality is required in the network. The downside is that when a network change is necessary while a data transfer is ongoing, the mobile will either not attempt the change at all or the change results in a temporary interruption of the ongoing data transfer. The answer to this downside is to perform a network controlled handover between the different radio systems. This makes the change between access networks a lot smoother but requires changes in both the new and the old radio networks. On the GSM/UMTS side, the software of the base stations and radio network controllers has to be upgraded to instruct the mobile to also search for LTE cells while it is active and to take the results into account in the existing handover mechanisms. As far as I can tell, no modifications are required in the SGSN, as transparent containers are used to transfer non-compatible radio network parameters between the different networks.
Packet Handovers Today
At this point I think it is interesting to note that packet handovers are already specified today for GPRS/EDGE to UMTS and vice versa. However, I haven't come across a network yet that has implemented this functionality. Maybe it is the speed difference between the two radio access networks that makes the effort undesirable. Between UMTS and LTE, however, such packet handovers might finally make sense as in many scenarios, the speed difference might not be that great.
The GGSN Oddity
One last thought: In annex D, the 2G/3G GGSN functionality is always taken over by the PDN-GW. That means that an LTE capable mobile should never use a 2G/3G only GGSN when first activating a PDP context in GPRS/EDGE or UMTS. If it did, I don't see how it would be possible to reselect to the LTE network later. This is due to the fact that the GGSN is the anchor point and can't change during the lifetime of the connection. If an "old" GGSN were the anchor point, then the MME and S-GW would have to talk to the "old" GGSN after a cell reselection or handover from GPRS/EDGE or UMTS to LTE instead of to a real PDN-GW. That's a bit odd and I don't see this described in the standards.
There are several ways in which that could be achieved. One possibility would be a special APN that triggers the use of a combined GGSN/PDN-GW when the connection is established; another would be an analysis of the IMEI (the equipment ID). While the first idea wouldn't require new software in the SGSN, the second one probably would, and then there is always the chance that you miss some IMEI blocks in the list on the SGSN, especially for roamers, so it's probably not such a good idea after all. Another option would be to replace the GGSNs in the network or upgrade their software so they become combined GGSNs/PDN-GWs. However, there is some risk involved in that, so some network operators might be reluctant to do it at the beginning.
If you know more about this or have some other comments or questions in general, please leave a comment below.
This week, I've ventured far beyond my 'normal' 3G use by giving remote support to someone connected with a notebook over a 3G link for more than 8 hours at a time. During that time, we had a Skype voice session established with excellent audio quality, used Instant Messaging and e-mail to send and receive documents, and I had a remote desktop session open to see what was going on and to directly lend a hand when necessary. All sessions were open simultaneously and there was not a single glitch with any application or the 3G connection.
That's what I call network stability! During that time, around 300 Mbyte of data were exchanged. It's impressive to see that both networks and devices have matured to such a level. On the network side, Mobilkom Austria (A1) has to be congratulated for the stability and performance of their HSPA network and for offering Internet access with prepaid SIMs. On the terminal side, the Huawei E220 modem did its part. Congratulations to all companies involved, it was a truly great experience!
In UMTS and HSPA, there are a number of different activity states on the air interface while data is exchanged with the network. During phases of high activity, the mobile device is usually put into dedicated state (Cell_DCH) and transmits/receives data on the high speed downlink shared channels and a dedicated uplink channel. During times of lower activity or to keep a physical connection open to resume data transfers quickly (e.g. the user clicks on a link after some time of inactivity) the network puts the connection into Cell_FACH (Forward Access Channel) state. While the FACH is quite slow, it reduces power consumption somewhat. However, not enough for all kinds of applications.
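To make the state handling described above a bit more concrete, here is a toy Python model of these transitions. It is only a sketch under assumed conditions: the inactivity timer values are illustrative (each network configures its own, as the measurements further down show), and real RRC state control involves far more signaling than modeled here.

```python
import enum

class RRCState(enum.Enum):
    IDLE = 0
    CELL_FACH = 1
    CELL_DCH = 2

class RRCConnection:
    """Toy model of network-controlled state demotion.

    Timer values are illustrative placeholders, not taken from any
    specific network.
    """
    DCH_INACTIVITY_S = 15    # demote Cell_DCH -> Cell_FACH after this much silence
    FACH_INACTIVITY_S = 45   # demote Cell_FACH -> Idle after this much silence

    def __init__(self):
        self.state = RRCState.IDLE
        self.idle_for_s = 0.0

    def on_data(self):
        # Any data transfer promotes the mobile to the dedicated state.
        self.state = RRCState.CELL_DCH
        self.idle_for_s = 0.0

    def tick(self, dt_s):
        # Advance the inactivity clock and demote when a timer expires.
        self.idle_for_s += dt_s
        if self.state is RRCState.CELL_DCH and self.idle_for_s >= self.DCH_INACTIVITY_S:
            self.state = RRCState.CELL_FACH
            self.idle_for_s = 0.0
        elif self.state is RRCState.CELL_FACH and self.idle_for_s >= self.FACH_INACTIVITY_S:
            self.state = RRCState.IDLE

conn = RRCConnection()
conn.on_data()
print(conn.state)   # RRCState.CELL_DCH
conn.tick(20)       # 20 s of silence -> Cell_FACH
print(conn.state)   # RRCState.CELL_FACH
conn.tick(50)       # another 50 s of silence -> Idle
print(conn.state)   # RRCState.IDLE
```

The key point the model captures is that the timers are purely network-controlled: the mobile's power consumption in each state is dictated by how long the network chooses to keep it there.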
eMail Polling in 3G mode
While in Austria recently, I noticed that when using 3's UMTS network and Profimail with a POP3 eMail polling interval of 5 minutes, my battery ran dry within 6 hours. Quite devastating and very short compared to GSM/GPRS/EDGE, where the battery easily lasts a full day under the same conditions. With the help of Nokia's Energy Profiler I got to the bottom of the problem. It turns out that 3 leaves the air interface in Cell_DCH state for 20-25 seconds after the last data packet has been sent before putting it into the Cell_FACH state for 1 minute and 45 seconds. Afterwards, the air interface connection is put into Idle state. In Cell_DCH state, even if no data is transmitted, power consumption is around 1.5 watts. In Cell_FACH state, power consumption is still around 0.8 watts, while in Idle state with the backlight off, power consumption is "almost zero". Even if no eMail is sent or received, these values result in the radio being active for almost half of each 5 minute interval, resulting in an average power consumption "in the pocket" (i.e. backlight always off) of 0.5 watts. As the battery capacity is 4.4 Wh (that is, watt hours), the result is that the battery is empty in just a couple of hours.
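The arithmetic behind these figures can be checked with a short back-of-the-envelope script. The timer and power values below are the measured ones from above; the idle tail of each polling cycle alone already averages roughly 0.4 W, and at the observed 0.5 W overall average (which includes the actual transfers) the 4.4 Wh battery is drained in under 9 hours, the same order of magnitude as the 6 hours seen in practice:

```python
# Back-of-the-envelope check of the eMail polling measurements above.
# All timer and power values are taken from the text; other networks
# configure different timers.

POLL_INTERVAL_S = 5 * 60     # POP3 polling every 5 minutes

DCH_TIME_S = 22.5            # ~20-25 s in Cell_DCH after the last packet
FACH_TIME_S = 105.0          # 1 min 45 s in Cell_FACH
DCH_POWER_W = 1.5            # measured power draw in Cell_DCH
FACH_POWER_W = 0.8           # measured power draw in Cell_FACH
IDLE_POWER_W = 0.0           # "almost zero" in Idle with the backlight off

BATTERY_WH = 4.4             # battery capacity in watt hours

def average_power_w():
    """Energy spent per polling cycle, divided by the cycle length."""
    idle_time_s = POLL_INTERVAL_S - DCH_TIME_S - FACH_TIME_S
    energy_j = (DCH_TIME_S * DCH_POWER_W
                + FACH_TIME_S * FACH_POWER_W
                + idle_time_s * IDLE_POWER_W)
    return energy_j / POLL_INTERVAL_S

def battery_life_h(avg_power_w):
    return BATTERY_WH / avg_power_w

print(f"idle-tail average power: {average_power_w():.2f} W")    # 0.39 W
print(f"battery life at 0.5 W:   {battery_life_h(0.5):.1f} h")  # 8.8 h
```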
I'd noticed this behavior in 3G networks before, but never to such an extreme. This is because most other 3G networks I usually use have different activity timers. In most other networks, the Cell_DCH state is left after about 15 seconds and Cell_FACH after about 30-45 seconds. This of course decreases the browsing comfort, because it often takes longer than 30-60 seconds to read a web page, in which case the transition from Idle to Cell_DCH state takes longer than from Cell_FACH to Cell_DCH. On the other hand, however, it increases the autonomy on a single battery charge.
eMail Polling in 2G mode
Polling eMails every 5 minutes while the mobile is locked to GPRS is much more efficient. Here, the mobile also draws about 1.5 watts while communication is ongoing. However, power consumption goes down almost immediately once no data is sent or received. As a result, the average power consumption is only 0.1 watts, or only a fifth of the power consumption in 3G mode.
Reducing the 3G timers to lower values is not an option, since it would have a negative impact on the user experience. Maybe the enhanced FACH, which is not yet implemented in devices and networks, will help somewhat in the future. When looking at the specifications, however, it looks like it mainly addresses capacity and not so much mobile device power consumption. So that remains to be seen.
Another possibility is to switch from the POP3 pull approach to a push approach where the server starts communicating with the device only when a new eMail has been received or very infrequently to keep the TCP session open. Not sure how Blackberries receive their email, but it would be interesting to experiment a bit. IMAP push would be another option but unfortunately, Profimail does not support that extension.
An interesting case in which the 2G air interface is superior to 3G. How LTE and WiMAX fare in the same scenario is also an interesting question. LTE, for example, has a different air interface state model compared to 3G. Here, only active and idle states exist, and active mode timers can be set by the network dynamically in a way that reduces the mobile's average radio activity time to almost the same values as in idle state. That should reduce power consumption somewhat, if the base stations are clever enough to adapt the timers based on the traffic pattern observed. We shall see…
With always on applications (think mobile eMail, IM, VoIP, etc.) on wireless devices, power consumption inevitably increases due to the constant exchange of TCP and UDP keep-alive messages to keep NAT firewalls open. Gone are the days in which wireless devices only communicated when there was really something to say. Pasi Eronen of the Nokia Research Center has taken a closer look at the issue and has measured and compared the impact of keep-alive messaging in 2G, 3G, 3.5G and Wifi networks. In the second part of the paper, Pasi then takes a look at how current VPN security products could be enhanced to avoid frequent UDP keep-alive messaging and thus increase the operating time of mobile devices. An interesting read, highly recommended!
Some of the findings:
- NAT timeouts for UDP are anywhere between 30 and 180 seconds
- NAT timeouts for TCP are anywhere between 30 and 60 minutes
- Sending a keep-alive packet every 20s increases power consumption by a factor of 10 or more
- The paper suggests that VPN products use a TCP connection to reestablish the UDP connection used for encrypted packets after a long timeout instead of sending frequent UDP keep-alives. Works well as long as no IM or VoIP client uses the VPN tunnel.
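To illustrate the mechanics behind these findings, here is a hypothetical Python sketch of a UDP keep-alive sender. The server address is a placeholder of my own invention, and the 30 second timeout is the most conservative UDP value from the findings above; the interval is simply chosen with a safety margin below it.

```python
import socket
import time

# Most conservative NAT UDP timeout from the findings above.
NAT_UDP_TIMEOUT_S = 30
# Stay safely below the timeout so the NAT binding never expires.
KEEPALIVE_INTERVAL_S = NAT_UDP_TIMEOUT_S * 0.7

def keepalive_loop(server=("vpn.example.com", 4500), cycles=3):
    """Periodically send a 1-byte datagram to keep a NAT binding alive.

    The server address is a placeholder; IKE NAT traversal, for example,
    uses a similar 1-byte keep-alive on UDP port 4500.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for _ in range(cycles):
            sock.sendto(b"\xff", server)
            time.sleep(KEEPALIVE_INTERVAL_S)
    finally:
        sock.close()

print(f"keep-alive every {KEEPALIVE_INTERVAL_S:.0f} s")
```

Every one of these tiny datagrams forces the radio out of its low-power state, which is exactly why the paper measures a factor-of-10 power increase for 20 second keep-alive intervals.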
Back in 2000, most of us in the industry thought that by 2012 or so, GSM would be well on its way to becoming history in Europe and elsewhere, having been replaced by 3G and whatever came afterward. Now, in 2008, it's clear that this won't be the case. About a year ago, I published an article looking at the reasons why. With LTE now at the doorstep, however, it has to be asked how mobile operators, especially in Europe, can support three radio technologies (GSM, UMTS/HSPA and LTE) for the foreseeable future.
While over the next few years, many network operators will transition their customer base to 3G handsets and thus might be able to switch off GSM from that point of view, there are a number of factors that will make them think twice:
- There might still be a sizable market of customers who are not willing to spend a great deal on handsets. The fact is that the additional hardware and licenses for combined GSM/UMTS handsets prevent them from becoming as cheap as very basic GSM-only handsets.
- Operators are keen on roaming charges from subscribers with 2G-only handsets; this is a very profitable business.
- Current 3G networks transmit at 2.1 GHz and as a result the inhouse coverage of 3G networks is far inferior to that of current GSM networks. Putting more base stations in place could help to some degree, but it's unlikely to be a cost effective solution.
In other words, in order to switch GSM off (whenever that might be), a number of things need to fall into place first, i.e. be part of an operator's strategy:
- 3G must be used on a wide scale in the 900 MHz band (or 850 MHz in the US and elsewhere). This, however, requires new mobile devices, as only a few models currently support this band. At this point in time it is not clear whether national regulators will allow the use of 3G networks in the 900 MHz band in all European countries, because it has significant implications for the competition with other technologies. Note: 4G deployment at 900/850 MHz is unlikely to help due to the voice gap discussed here.
- An alternative could be that combined DSL/Wi-Fi/3G Femtos become very successful in the market, which could compensate for missing 900 MHz coverage. But I am a bit skeptical if they can become that successful.
- Most roamers would suddenly have to show up with 3G capable handsets. I don't see that happening in the near- to mid-term either, as many countries are not going down the 3G route, even at 900 MHz. Also, roamers with mobiles from places such as North America use different 3G frequencies, so their devices would not work in Europe and elsewhere, and of course vice versa. Maybe this will change over the next couple of years, but except for data cards, I haven't seen a big push for putting 3G on 850/900/1700/1900/2100 MHz into handhelds.
At some point, however, it might become less and less economical to run a full blown GSM network alongside UMTS/HSPA and LTE networks despite lucrative 2G roamers and better inhouse coverage on 900 MHz. I see several solutions to this:
- As GSM traffic declines in favor of 3G, it will at some point be possible to reduce the capacity of the GSM network. At that point, separate GSM, UMTS and LTE base station cabinets could be combined into a single box. Base station equipment keeps shrinking, so it is conceivable that at some point the GSM portion of a base station will take up only little space. Using a single antenna casing with several wideband antennas inside could keep the number of antennas required to run three network technologies alongside each other at the status quo. Cabling could also be kept fairly constant with techniques that combine the signals to/from the different antennas over a single feeder link. For details, have a look at my post on the discussion I recently had with Kathrein.
- Maybe advances in software defined radio (SDR) will lift the separation between the different radio technologies in base station cabinets. Should this happen, one could keep GSM alive indefinitely. SDR has been discussed in the industry for many years now. Since I am not a hardware/radio expert, I can't judge if and when this might become part of mainstream base stations.
- And yet another interesting idea I heard recently is that at some point two or more operators in a country might think about combining their GSM activities so that, instead of running several networks, only a single GSM network is maintained by all parties involved. As this network is just in place to deal with the roamers and the super-low ARPU users (and maybe still lacking inhouse coverage), it is unlikely that it will be upgraded with new features over time, so it could be pretty much static. Running such a combined network to save costs might therefore be a lot easier than running a combined 3G network.
So what is your opinion, which scenario is the likeliest?
WiMAX World recently published an interesting article by Caroline Gabriel on spectrum and auction issues for WiMAX (and other wireless technologies). A very good read!
I find it very funny how time changes opinions. Some years back, BT couldn't get rid of their mobile branch soon enough. Now, they can't wait to buy spectrum and to start from scratch. Total insanity, but it reflects the reality in my opinion that in the future, only operators being able to offer fixed (via Wifi) + cellular wireless access will remain relevant.
So far, I always thought refarming 900 MHz frequencies was a good idea. After this article I understand the political dimension of this a bit better. I guess some operators are hoping that they can use their current spectrum indefinitely and for a very low price if they can escape an auction.
I guess this would be a major disadvantage for potential new entrants. 900 MHz is great for indoor coverage especially in cities, as even 3G coverage at 2.1 GHz fades away very quickly indoors. So if new entrants wouldn't have a chance to get such bands in the future, they would be at a constant disadvantage everywhere, not only in the countryside.
As a user on the other hand I don't want to wait until 2020 before I get 3G and 4G deep indoors without Wifi. Ugh, a tough call for regulators.
Concerning the first mover advantage and the claimed 18 month WiMAX lead over LTE: First, I think this lead is not really a lead, as it is debatable how much faster WiMAX is compared to current HSPA networks. Additionally, I wonder if 802.16e is really ready for prime time. One year ago, three companies bought nationwide licenses in the 3.6 GHz band in Germany. I haven't heard of them doing anything since, beyond patchy deployments in a few places.
In the meantime, 3G price plans have become available that give users several gigabytes of data per month for a couple of pounds. Should there be any first mover advantage, that's pretty much a show stopper in itself.
This all sounds a bit negative for WiMAX, but I think there are still opportunities out there. The 3GPP operators are far from doing everything right. Especially for occasional users who just want to open their notebook, no matter in which country they are, and get access for some time without worrying about subscriptions, SIM cards, etc., this camp does not yet have the right answer. And then there are the countries that don't have 3G yet for various reasons, such as India and China. In some countries, however, incumbents are starting to wake up. So hurry, WiMAX, before this one goes to them as well.
Some years ago, I tested how long the battery of a mobile phone would last when the device was connected to a 2G or 3G network (PDP context established) but not transferring any data for most of the time. At the time, the result was quite clear: I could almost watch in real time how the battery level decreased. It looks like things have changed quite a bit in the meantime.
When repeating the test these days with a Nokia N95 and a Nokia N82, one connected to an EDGE network and the other to a UMTS network over the course of the day while transferring almost no data, there seems to be no difference anymore compared to the device not being connected at all throughout the day. The picture on the left shows a screenshot of my N95 that was connected to an EDGE network throughout the day. Note that at the time the screenshot was taken, the mobile was also connected to a Wireless LAN network (i.e. some applications used the EDGE connection, others the Wifi connection). The same test with the N82 connected to a 3G network showed the same result.
Very good, one thing less to be concerned about! No more advice about disconnecting from the network due to the fear of running the battery into the ground quickly.
HSPA+ is about more than just higher data rates, it is also about enhancing the radio interface to allow more devices to simultaneously connect to the network in a more power efficient way. I’ve described most of those features in various blog entries in the past but it seems I have missed one feature: Enhanced Cell-FACH.
One of the challenges of always on Internet connectivity is that mobile devices or PCs running instant messaging applications, Voice over IP programs, push eMail and other connected programs are anything but silent, even while these applications are just running in the background. Even if just one of those applications is running, the device transmits and receives several IP packets per minute to keep the connection to the servers on the Internet alive. This means that in most cases, the radio link of the mobile device is not in idle state for most of the time.
Keeping the mobile in a fully connected state while only little data is transferred is quite wasteful in terms of bandwidth and battery capacity. UMTS networks therefore usually set devices into the so-called Cell-FACH state once they detect that there is only little activity. In this state, the device uses the Random Access Channel to transmit IP packets in the uplink and the Forward Access Channel (FACH) in the downlink to receive IP packets.
This method is quite efficient for the mobile, since no power control is performed on those channels. Hence, there is no radio layer signaling overhead in this state, which leaves more air interface capacity for other devices and also saves battery capacity. For the network, however, managing more than a few mobiles per cell on the FACH is not as efficient, since the channel was never designed to function as an always on data pipe for a high number of devices.
This is where the Enhanced Cell-FACH extension comes in. Once mobiles support this feature and they are set into Cell-FACH state, their data packets are sent on a High Speed Downlink Shared Channel (HS-DSCH) instead of the Forward Access Channel. This improves the efficiency of downlink transmissions and also speeds up the state transition into dedicated state once more packets are transferred again. An application note by Rohde & Schwarz goes into the details in Chapter 6.
Two things puzzle me a bit at this point:
- When will the feature become available?
- In Cell-FACH state, the mobile is identified via the Cell Radio Network Temporary ID (C-RNTI). In theory, this is a 16 bit value, i.e. up to 65536 mobiles per cell could be in Cell-FACH state simultaneously. Strangely enough, most networks only seem to increase this value up to 0xFF (i.e. 256 values) before it is reset back to 0. Anyone got any idea why?
I still remember that in the early days of GPRS, the main problem was to get mobile devices that could actually make use of the new network service. The story repeated itself with UMTS, where things became even worse. When UMTS first started, there were lots of networks around but no, or only clunky, mobile phones available for at least a year or so.
In the meantime, it looks like the situation has reversed. Quite a number of 7.2 MBit/s HSPA devices are available, but only a few networks yet support ten simultaneous downlink spreading codes and have the required backhaul capacity to the base station. With HSUPA it is quite similar: a number of devices, mainly USB sticks, are available on the market today, but most networks still lack support. And it's not only in UMTS that devices are far more capable than most networks today.
Even 2G mobiles now support features that most networks are lacking. The AMR (Adaptive Multi Rate) speech codec is a good example: widely supported in handsets today, but only used in few networks, despite the potential capacity increases the feature offers to operators. Or take DTM (Dual Transfer Mode), which enables simultaneous voice calls and Internet connectivity for GSM/GPRS/EDGE devices. Again, many mobiles support this today and it could be put to good use, especially with feature phones. However, I haven't seen a single network that supports it in practice.
A worrying trend. Are the standards bodies specifying too much?