Radio Signaling Load of Background IP Applications

Here's a link to an interesting post on Mobile Europe on the impact of IP applications running in the background on wireless networks. In short, the message is that although instant messengers, e-mail applications and other connected programs running in the background require relatively little bandwidth, they nevertheless have a significant impact on the overall radio link capacity. So why is that? A reader recently asked me exactly this question.

Let's make a practical example: On my Nokia N95, I use the VoIP client over Wi-Fi a lot. The client works in the background and every now and then communicates with the SIP VoIP server in the network to let it know that it is still there and to keep the channel open for incoming messages. This requires very little bandwidth, as only a few messages are sent, from an IP point of view about 2 a minute. For the full details, have a look at this earlier post.
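Just how little bandwidth this is can be checked with a quick back-of-the-envelope calculation. The message size used here is an assumption, a typical SIP keep-alive or register refresh is on the order of a few hundred bytes:

```python
def keepalive_bitrate(msgs_per_minute, msg_size_bytes):
    """Average bit rate generated by periodic keep-alive messages, in bit/s."""
    return msgs_per_minute * msg_size_bytes * 8 / 60

# 2 messages a minute of roughly 600 bytes each (assumed size):
rate = keepalive_bitrate(2, 600)
print(f"{rate:.0f} bit/s")  # → 160 bit/s
```

160 bit/s is negligible next to the several hundred kbit/s a 3G bearer can carry, which is exactly why the impact is surprising at first glance.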

From a 3G cellular radio network perspective, however, things look a lot different. There are two possibilities: The network could keep the radio link to the mobile device open all the time. This, however, would drain the mobile's battery very quickly as the mobile constantly has to monitor the link for incoming data. Further, this would waste a lot of bandwidth, since a full air interface connection requires a frequent exchange of radio link quality messages between the mobile and the base station. In other words, there is lots of overhead monitoring and signaling going on while no data is transferred.

The other option, which is usually used today, is to set the mobile device into a lesser activity state. In practice this means that when the network detects little activity, it switches to channels that are less efficient but do not require constant radio link quality measurement reports and dedicated channel resources. That's already a bit more efficient but still consumes a lot of energy on the mobile side. For details see my earlier post on the "FACH power consumption problem". Well-configured networks detect that only little data is transferred and keep that state. Other networks jump back to the full channel as soon as data is exchanged again, which requires lots of radio link signaling. Further, in UMTS, the channel switching is organized by the Radio Network Controller and not by the base station itself, thus putting quite a high burden on a centralized network element.
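The channel switching described above can be sketched as a small state machine. This is a toy model only: the timer values and the packet-size threshold for promoting a device to the full channel are assumptions for illustration, real networks use operator-specific inactivity timers and more states (e.g. CELL_PCH):

```python
class RrcStateMachine:
    """Toy model of 3G radio channel states and inactivity timers."""
    DCH = "CELL_DCH"    # dedicated channel: fast, power-hungry
    FACH = "CELL_FACH"  # shared channel: slower, still drains the battery
    IDLE = "IDLE"       # no radio connection: lowest power

    def __init__(self, dch_timeout=5.0, fach_timeout=30.0):
        self.state = self.IDLE
        self.dch_timeout = dch_timeout    # assumed: seconds of silence before DCH -> FACH
        self.fach_timeout = fach_timeout  # assumed: seconds of silence before FACH -> IDLE
        self.last_activity = 0.0

    def on_data(self, now, nbytes):
        """Data transfer promotes the device: in this model, small packets
        (e.g. keep-alives) go to FACH, large transfers to the full channel."""
        self.last_activity = now
        if nbytes > 500 or self.state == self.DCH:
            self.state = self.DCH
        else:
            self.state = self.FACH

    def on_tick(self, now):
        """Inactivity timers demote the device one step at a time."""
        idle_for = now - self.last_activity
        if self.state == self.DCH and idle_for >= self.dch_timeout:
            self.state = self.FACH
            self.last_activity = now  # the FACH timer starts now
        elif self.state == self.FACH and idle_for >= self.fach_timeout:
            self.state = self.IDLE
```

A keep-alive every 30 seconds keeps such a machine bouncing between states forever, and each transition costs signaling messages, which in UMTS terminate in the centralized Radio Network Controller.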

So what does this mean in practice? In networks today, a single base station covers around 2000 mobile devices. With traditional wireless voice this is not a problem, as there is no ongoing signaling between the device and the network while there is no call. With VoIP that is not optimized for wireless, however, as described before, 2 messages are exchanged per minute per device, plus potentially further radio interface signaling for channel switching and radio link measurements. In other words, such background IP packets have a higher radio link capacity impact than their size suggests, compared to big IP packets that are part of a time-limited high-bandwidth data flow, e.g. while transferring a web page.

Now multiply that background traffic by 2000 devices per base station (assuming for a moment a pure IP world, non-optimized) and you get about 66 messages a second that need to be transmitted. Many of these require state changes, thus creating additional signaling in the network. Add to that IM, e-mail, etc., and the number rises further.
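The per-cell figure above is simple arithmetic, 2000 devices each sending 2 keep-alives a minute:

```python
devices_per_cell = 2000
msgs_per_device_per_minute = 2

msgs_per_second = devices_per_cell * msgs_per_device_per_minute / 60
# roughly 66.7 keep-alive messages arriving every second, per base station
```

And that is before counting the channel switching and measurement signaling each of those messages may trigger.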

Now why is this different to fixed line networks? There are two reasons: First, in fixed line DSL networks, there is usually only a single household behind a DSL line with only a few devices creating background noise. Second, in fixed networks no additional overhead is required for managing a shared transmission resource, i.e. the air interface. In other words, a small packet just takes that amount of bandwidth on the cable, no less, no more.

To be clear: I am not saying this is a problem for wireless networks (yet). It's just a lot more background traffic than there used to be, and such packets require more bandwidth on the air interface than their size suggests. Also, standards are addressing this change of application behavior, for example with UMTS enhancements such as Continuous Packet Connectivity, or in LTE by moving the radio state management from a centralized network element directly into the base station.

In any case, I guess we'll see such always-on applications optimized for mobile use over time, i.e. more push than poll and less keep-alive signaling. But that's probably not done to please network operators or to increase overall network capacity but to reduce power consumption on mobile devices.

6 thoughts on “Radio Signaling Load of Background IP Applications”

  1. Martin, do you have an idea how such signaling works in UMA? I have UMA setup at home, with BlackBerry terminal logging on to the Orange.PL network via my WiFi/DSL line. It works flawlessly and does not drain the battery… which is kind of a surprise for me, as the phone has to stay permanently connected to the WiFi AP.

  2. Hi Szymon,

    good question, things are quite different in Wi-Fi compared to cellular. I’ll put my thoughts in a follow up post in the next couple of days.


  3. Hi Martin,
    What is the applicability of the ECM Idle mode in the presence of such applications that keep the UE active all the time?

  4. Hi Sharon,

    I guess you won’t see a lot of ECM idle for devices that act this way. But as I said in the post, I am sure applications will be optimized over time to extend battery life and thus won’t send signaling messages that often. Well, hopefully 🙂


  5. Hi Martin and Everybody else,

    I’m sorry for not letting this go, but I thought that since we got this far, better clear it completely.

    First, the Alcatel-Lucent 9900 WNG doesn’t solve the problem, it only brings it up, it works as a very smart sniffer. Is that right?

    Second, what it doesn’t make sense to me is that one of the culprits mentioned in Martin’s link to generate this behaviour is mobile email. I thought that mobile email is a “push” protocol (such as Blackberry), not a “poll” protocol (such as POP3). If there is no regular, “keep-alive”, type of packet traffic on the wireless interface then why would this wasteful behaviour be noted when using mobile email?

    Mr. Martin, would you have one last go at this and then I promise I’ll let it die. 🙂

    Your explanation is well understood: any protocol that sends short “keep-alive” packets every now and then (such as SIP, Skype, IM, etc.) is wasteful on the battery and the air interface.


Comments are closed.