HSDPA At The Edge – Between 50 And 800 kB/s – With 15 cm Difference

Already back in 2007, when I got my first HSDPA data card, I was quite surprised how big the difference in data transfer speed was when a mobile device's antenna was moved by just a few centimeters. In the meantime, better receivers and noise cancellation must surely have done away with this!? Turns out that this is not the case at all.

Recently, while in the countryside with very weak network coverage, I only managed to get around 50 kbyte/s of throughput with my 3G dongle and a very unstable connection. But with some measurement equipment I finally found a spot in my room where I could get a sustained and rock-solid data rate of around 750 kbyte/s, i.e. around 6 Mbit/s (Dual Carrier 3G dongle…), in the downlink direction and around 3.5 Mbit/s in the uplink direction.
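Since throughput figures in kbyte/s and Mbit/s are easily confused, here's a quick sanity check of the numbers in this post as a small Python snippet (purely illustrative):

```python
# Convert the throughput figures of this post from kbyte/s to Mbit/s
# (1 byte = 8 bits, 1 Mbit/s = 1000 kbit/s).
for kbyte_s in (50, 400, 750):
    print(f"{kbyte_s} kbyte/s = {kbyte_s * 8 / 1000:.1f} Mbit/s")

# 50 kbyte/s  = 0.4 Mbit/s
# 400 kbyte/s = 3.2 Mbit/s
# 750 kbyte/s = 6.0 Mbit/s
```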

The screenshot on the left shows the remarkable difference (click on the picture to enlarge). The low throughput in the left half was measured in various spots in the room, while the right half shows what is possible from one corner of the room. Just 15 cm away and we are back at 50 kbyte/s. Remarkable!

Even with an old HSDPA Category 8 dongle I could still get around 400 kbyte/s in the downlink direction. I did not quite expect that, because it is more than half of what is possible with the dual carrier dongle, even though the Category 8 stick does not have the same kind of interference cancellation techniques as the dual carrier stick.

First Numbers on VoLTE Power Consumption

Light Reading has recently posted an interesting article on how much power a Voice over LTE call draws from a smartphone battery compared to a circuit switched call today. In the report, Metrico Wireless is quoted with a power consumption of 1358 mW during a VoLTE call compared to 680 mW during a circuit switched call over a CDMA network.

No further details are available, so it's difficult to tell why the difference is so significant in the device tested. It could be down to many factors, such as still non-optimized air interface usage for voice over IP on both the network and the smartphone side. There is also additional overhead in the smartphone itself, which uses the baseband processor for the radio transmission but runs the IP stack and voice compression on the application processor, versus the standard circuit switched voice approach in which all processing is done in the baseband processor.

People who mostly use their smartphones for non-voice related activities will probably not care very much. For those who make a lot of phone calls during the day, however, the difference will be quite notable.
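To get a feeling for how notable, here's a rough back-of-the-envelope calculation in Python. Only the two power figures come from the report; the battery capacity is my own assumption (a typical smartphone battery of the time, 1500 mAh at 3.7 V):

```python
# Rough talk-time estimate from the reported power figures.
BATTERY_WH = 1.5 * 3.7  # 1500 mAh at 3.7 V = 5.55 Wh (assumed capacity)

for label, power_mw in (("VoLTE", 1358), ("CS voice over CDMA", 680)):
    hours = BATTERY_WH / (power_mw / 1000)
    print(f"{label}: {hours:.1f} hours of continuous talk time")

# VoLTE:              4.1 hours
# CS voice over CDMA: 8.2 hours
```

In other words, under these assumptions the talk time is roughly cut in half.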

The interesting thing to look out for now is by how much these initial values can be improved over time. And there is still some time left, as there are only a few VoLTE deployments and users so far, and LTE network ubiquity in most countries is still far from that of GSM and UMTS.

Inter-Band LTE Carrier Aggregation only in 3GPP Release 11

While there is little incentive for European operators to think about LTE carrier aggregation at the moment, things might be different for US operators that have a couple of megahertz in one band and a couple of megahertz in another (small allocations compared to the 20 MHz bands used in Europe). So I was a bit surprised when reading this whitepaper by Nomor Research, which points out that inter-band carrier aggregation is only introduced in 3GPP Release 11, which is not even out the door yet. In other words, we are at least 2 or 3 years away from mobile devices (and networks) that might actually be able to combine carriers in different bands. Very strange; I would have thought US carriers would have insisted on getting this functionality sooner…

P.S.: The paper also contains interesting details on how a correct timing advance is ensured in inter-band carrier aggregation HetNet situations, where antennas are distributed and connected via fiber links, and the timing advance for different component carriers can therefore differ.

How Does A Closed Femto Cell Reject Users?

Femto cells have been a buzzword in the industry for many years. They might now have been renamed to small cells, but in most countries they are not (yet) used. One way to use a femto cell is to patch coverage holes in private homes. Here, it may make sense to restrict the use of the femto to family members. Other mobiles that see the femto and try to use it are then rejected and have to return to the macro network. There are several ways this can be done, and I always wondered which one is used in practice. Recently, I stumbled over this blog entry that describes an encounter with a femto in practice. This one used a Location Update Reject with cause code #13 (Roaming Not Allowed In This Location Area), which then triggers the return to the macro network. For further details, have a look at that post; it also contains other interesting thoughts about femtos in practice based on the author's observations.
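To illustrate, here's a small Python sketch of how such a reject might be interpreted on the mobile side. The cause values are from 3GPP TS 24.008; the code itself is just my illustration, not taken from a real protocol stack:

```python
# A few Mobility Management reject cause values from 3GPP TS 24.008.
LU_REJECT_CAUSES = {
    11: "PLMN not allowed",
    12: "Location Area not allowed",
    13: "Roaming not allowed in this location area",
    15: "No suitable cells in location area",
}

def handle_lu_reject(cause):
    """Illustrative reaction of a mobile to a Location Update Reject."""
    print(f"Reject cause #{cause}: {LU_REJECT_CAUSES.get(cause, 'unknown')}")
    if cause == 13:
        # The femto's location area is barred for this subscriber, so the
        # mobile puts it on its forbidden list and returns to the macro network.
        print("-> storing forbidden location area, reselecting a macro cell")

handle_lu_reject(13)
```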

Observation: Youtube Is Now HTTPS – But The Streams Are Not

When I watched a video on Youtube today I noticed that the page's URL was https://www.youtube.com…. Interesting, I thought, it's encrypted now! If the streams are encrypted too, that would have interesting implications for video caching and compression servers in some mobile networks, as they would no longer be able to compress and scale videos.

So I ran a quick Wireshark trace to see if the streams themselves were encrypted, too. However, they were not. An interesting implication is that the user might get the impression that the session is secure. But as the videos are sent in the clear, it's actually not secure at all. From the outside, it is no longer possible to see what the user is searching for, but which videos are streamed is still visible, and they can be cached, modified or simply blocked.
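For those who want to check this themselves, here's a minimal Python sketch with scapy that separates plain-text video requests from TLS traffic. The 'videoplayback' URL path for the streams is an assumption on my part; treat the whole thing as a rough illustration rather than a finished tool:

```python
# Watch for video stream requests that go out in the clear on port 80
# while the page itself uses TLS on port 443 (requires root privileges).
from scapy.all import sniff, TCP, Raw

def classify(pkt):
    payload = bytes(pkt[Raw].load)
    if pkt[TCP].dport == 80 and payload.startswith(b"GET "):
        # Assumed URL path of the video streams
        if b"videoplayback" in payload:
            print("Unencrypted video request:", payload.split(b"\r\n", 1)[0])
    elif pkt[TCP].dport == 443 and payload[:1] == b"\x16":
        # 0x16 is the TLS handshake record type, i.e. an encrypted session
        print("TLS handshake to port 443 (page content is encrypted)")

sniff(filter="tcp port 80 or tcp port 443",
      lfilter=lambda p: p.haslayer(TCP) and p.haslayer(Raw),
      prn=classify, store=False)
```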

As the unencrypted URLs are requested by the Flash player, there's also no warning about "secure and non-secure elements on the web page", as browsers often display when web pages mix secure and non-secure content.

From this point of view, I am not sure that it is a good idea to use https for Youtube. It simply gives a wrong impression of security to the user…

Book Review: Practical Malware Analysis

A non-mobile book review today, about a book that has taught me a lot about things that are quite relevant in mobile, too. After reading Zero Day, about which I blogged here, and after I had to come to the rescue to get rid of a malicious program on a friend's PC, I decided it was time to learn more about the subject. After doing some research I got an ebook copy of 'Practical Malware Analysis – The Hands-On Guide To Dissecting Malicious Software' by Michael Sikorski and Andrew Honig.

After reading the first paragraphs I was instantly hooked, and the material is well laid out, from the simple to the complex. The authors first discuss static malware analysis, which means looking at a binary file from the outside with various tools without actually running it. All tools, such as the virustotal website, strings, MD5 checks, PEiD, Dependency Walker, Resource Hacker and the IDA Pro disassembler, are available for free on the web.
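Many of these first steps are simple enough to script yourself. As a flavor of what basic static analysis looks like, here's a little Python sketch (my own, not from the book) that computes hashes for a virustotal lookup and extracts printable strings from a sample:

```python
import hashlib
import re
import sys

# Read the suspicious file without ever executing it: static analysis only.
data = open(sys.argv[1], "rb").read()

# Hashes uniquely identify the sample, e.g. for a lookup on virustotal.
print("MD5:   ", hashlib.md5(data).hexdigest())
print("SHA256:", hashlib.sha256(data).hexdigest())

# A poor man's 'strings': runs of 6+ printable ASCII characters often
# reveal embedded URLs, registry keys or Windows API names.
for s in re.findall(rb"[ -~]{6,}", data)[:30]:
    print(s.decode("ascii"))
```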

Apart from the main goal of learning how to detect and what to do about malicious programs, I was especially interested in the chapter on disassembly. It's been a long time since I last worked directly with assembly code, and while I still knew the basics, I looked forward to using the book as a refresher course as well. Just looking at assembly programming from a theoretical point of view, without something practical in mind to do with it, would not have been much fun. Malicious program analysis was the perfect use case for a refresher on assembly language, processor internals and operating system details.

The second part of the book then deals with dynamic analysis, i.e. actually running the malicious program to see what it does. My recent ventures into the virtual machine space paid off handily here, as it's absolutely necessary to run malicious programs in a virtual machine with proper separation of host and guest operating system and separate Internet connectivity, to ensure other hosts on the network can't be touched by the malware in case it decides to go viral. A virtual machine also comes in handy because snapshots of intermediate results can be saved, and a clean environment can be restored after the analysis by simply reverting to a snapshot. Again, all tools for dynamic analysis discussed in the book are freely available on the web.

The book also discusses what C and C++ code looks like in assembly. For me that was a highly interesting topic even outside of malware analysis, as I always felt this was the missing link between my knowledge of higher level programming languages and the assembly world. Especially the inheritance part of C++ had always puzzled me as to how it might look in assembly code. All chapters, including this one, have a learning section with sample code provided, and it was often quite humbling to do the exercises after reading the chapter. It seemed so clear when reading about it, but the real understanding came when actually doing the exercises and working with the code.

At some point I also started working on real malicious code; the stream into my email inbox supplies fresh samples that get past the network-based malware scanner almost daily. With the tools and methods learned, one can quickly see what the malware does, which files it creates, how it ensures that it is started automatically, how it calls home to the command and control server and how it downloads further malicious code. Once the virtual machine was infected, it was also a good test bed to see how my arsenal of virus removal tools dealt with the issue and whether all malicious files were found. Sometimes they were, sometimes they weren't, and only another try a week later with updated virus signatures removed the infection.

The hard part with real malicious programs is disassembling the code or running it in a debugger. All samples I got via email contained a multi-stage packer, which helps the malware hide from antivirus software and also makes analysis of the code a lot harder. Some of the malware contained anti-debugging code that detects it is being looked at and then does something entirely different. Lots of the packed code I examined also used only indirect function calls to the Windows API, making it difficult, if not impossible, to statically analyze with a disassembler. All of these things are discussed in the book, but in practice they take a newcomer a lot of time to overcome.
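To give a flavor of how simple the most basic anti-debugging trick is, here's the classic IsDebuggerPresent check, reproduced in Python via ctypes purely for illustration (Windows only; real samples use this and far more elaborate variants):

```python
# The simplest anti-debugging check: ask Windows whether a debugger is
# attached to the current process.
import ctypes

if ctypes.windll.kernel32.IsDebuggerPresent():
    print("Debugger detected: a real sample might now behave differently")
else:
    print("No debugger attached")
```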

Further topics discussed in the book, again including examples to dissect, are user-space rootkits, kernel debugging, kernel rootkits, shellcode and 64-bit malware. The book also goes into the details of how stack overflows are used to infect machines in the first place and discusses countermeasures such as address space randomization and stack execution prevention. These make it harder to exploit a vulnerability, but the book also shows how black hats have found their way around these countermeasures.

The one thing I was really surprised about, because I had never heard of or seen this before, is how malicious programs run inside other running processes to hide themselves. This is called process injection and removes the Trojan horse completely from view. One real piece of malware I examined copied itself into explorer.exe, and another spawned a svchost.exe instance and lived in there. There are various methods by which this can be done, again all described in the book and backed up with sample code that can be analyzed and run for better understanding.
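As a side note, some of these tricks can be spotted with surprisingly simple heuristics. Here's a sketch of the svchost.exe case in Python with the psutil library (my own illustration, not from the book): legitimate svchost.exe instances on Windows are started by services.exe, so any other parent is suspicious:

```python
# Flag svchost.exe processes whose parent is not services.exe, a simple
# heuristic against the spawning trick described above (Windows only).
import psutil

for proc in psutil.process_iter(["pid", "name"]):
    if (proc.info["name"] or "").lower() != "svchost.exe":
        continue
    try:
        parent = proc.parent()
        parent_name = parent.name().lower() if parent else "<none>"
    except psutil.Error:
        continue  # process vanished or access was denied
    if parent_name != "services.exe":
        print(f"Suspicious: svchost.exe (pid {proc.info['pid']}) "
              f"started by {parent_name}")
```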

It's been a long review and I still haven't touched on all the points I found interesting in the book. With some background in programming, Windows and how computers work in general, the book is easy to read, and the example code sections always start with something easy and increase in difficulty towards the end. In other words, a fully recommended read from a malicious code analysis point of view. And if you want to learn more about how operating systems and computers work, looking at malicious code is just the kind of practical application that makes going through the general theory worthwhile.

Before I close, some thoughts on technical books in ebook format vs. print: If I had intended to read it only at home, I would have ordered the print version. However, since I was traveling at the time and wanted to start with this topic right away, I went for the Kindle version. While this was definitely beneficial in terms of instant availability and not needing to carry a full book, I have to say that there's still a lot of room for improvement when reading a technical book on an ebook reader. Quickly jumping from one place in the book to another, going to the table of contents and back, taking notes and generally having a visual idea of where some information might be found are all very hard to come by in the electronic version. I don't know if there will be a perfect middle ground in the future, but my ideal book weighs nothing, is instantly available (i.e. downloadable), is a binary file I own for life that nobody can take away, and lets me jump around as in a print version, combined with text search to find specific content. We are still far away from this.

How Large Is A URA?

After my recent discovery that three mobile network operators in Germany now use Release 8 Fast Dormancy to reduce the signaling overhead and power consumption of UMTS devices, and also to improve responsiveness to user input from a more power efficient state, I wanted to dig a bit deeper to see how each one makes use of the feature. This post is about the network that uses URA-PCH as the more energy efficient state, instead of Cell-PCH as most networks do today.

The difference between Cell-PCH and URA-PCH is that the mobile does not have to send a Cell-Update message whenever it moves from one cell to another. Instead, an update is only necessary when the mobile reselects to a cell that is in a different URA (UTRAN Registration Area, 3GPP TS 25.401). This saves signaling overhead and power when a mobile is located just between two cells and thus frequently switches between them. It is equally beneficial for fast moving users in cars or trains where mobiles select a new cell quite frequently, often several times a minute.

The big question for me was how large a URA actually is. Just one cell, several cells, or even much bigger? As I am a frequent public transportation commuter, the answer was not too difficult to come by, and it was actually quite surprising. On a 30 km train trip, all cells (I estimate there are around 25-35 on that route, based on Cell Logger traces) are in the same UTRAN Registration Area, as I didn't observe a single update message being sent to the network. So a URA is quite large in that network, perhaps as large as a whole Location Area, which usually includes a major city such as Cologne or Bonn and its surroundings.
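To put a number on the saved signaling, here's a toy calculation in Python. The figures (30 cells crossed on the trip, one reselection per cell) are rough assumptions based on my observations above, not measured values:

```python
# Toy comparison of update messages on a train trip crossing 30 cells:
# Cell-PCH sends a Cell Update on every reselection, URA-PCH only when
# the mobile crosses a URA boundary.
def update_messages(cells_crossed, cells_per_ura):
    cell_pch = cells_crossed                  # one update per cell change
    ura_pch = cells_crossed // cells_per_ura  # only at URA boundaries
    return cell_pch, ura_pch

for ura_size in (1, 5, 30):
    c, u = update_messages(30, ura_size)
    print(f"URA of {ura_size:2d} cells: Cell-PCH {c} updates, URA-PCH {u}")
```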

The downside, of course, is that for incoming data packets the mobile has to be paged not only in a single cell but in all cells of the URA. For devices that do not use the fast dormancy trigger mechanism themselves, such as UMTS/LTE sticks, this is not a big problem, as the network timer for moving to the URA-PCH state (from Cell-FACH) is set to 30 seconds. For mobiles that use the Release 8 Fast Dormancy functionality (Signaling Connection Release Indication), things could be different. Triggering it too early could result in a state ping-pong and frequent paging in all cells of the URA until all data has been exchanged. From my practical experience with the feature, however, that seldom happens.

To summarize: Using URA-PCH instead of Cell-PCH can be quite advantageous for network operators, as no signaling is required when the user moves within the URA. For the user, URA-PCH has the advantage over Cell-PCH that less power is spent on signaling unrelated to user data while riding in trains and cars. Let's see, perhaps it will become a growing trend.

Release 8 Fast Dormancy Now In 3 UMTS Networks In Germany

Two and a half years ago I wrote a lengthy post about the power consumption problems of smartphones and one remedy for them, referred to as 3GPP Release 8 Fast Dormancy. This feature enables the mobile device to inform the network that it thinks no further data will be transferred for the moment and that it would like the radio link to be downgraded to a more energy efficient state. This way, the timeout period during which power consumption is high can be significantly reduced. This is very beneficial when only background traffic, such as keep-alive pings and email push/pull services communicating with a server, produces short bursts of traffic while the mobile is not actively used. Another benefit is that the connection is put into a semi-dormant state (Cell-PCH / URA-PCH, see the post linked above) from which it can be reactivated much more quickly than from a fully idle state. Shortly after that post, one German network operator actually switched the feature on.
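To illustrate why this matters, here's a simplified Python sketch of the timer mechanics. The power figures and timer values are illustrative assumptions on my part, not measurements from a real network or device; only the DCH to FACH to PCH state ladder itself is how UMTS actually works:

```python
# Energy spent after a short traffic burst, with and without Fast Dormancy.
# All power and timer values below are illustrative assumptions.
POWER_MW = {"DCH": 800, "FACH": 400, "PCH": 20}
T_DCH_TO_FACH = 8    # assumed network inactivity timer (seconds)
T_FACH_TO_PCH = 30   # assumed network inactivity timer (seconds)

def energy_joules(fast_dormancy, window_s=40):
    if fast_dormancy:
        # The mobile signals 'no more data' and is moved to (Cell/URA-)PCH
        # after roughly a second instead of waiting for the timers.
        return (POWER_MW["DCH"] * 1 + POWER_MW["PCH"] * (window_s - 1)) / 1000
    # Without it, the mobile waits out both inactivity timers.
    idle = window_s - T_DCH_TO_FACH - T_FACH_TO_PCH
    return (POWER_MW["DCH"] * T_DCH_TO_FACH
            + POWER_MW["FACH"] * T_FACH_TO_PCH
            + POWER_MW["PCH"] * idle) / 1000

print(f"with fast dormancy:    {energy_joules(True):.1f} J over 40 s")
print(f"without fast dormancy: {energy_joules(False):.1f} J over 40 s")
```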

So when I recently checked the state of the networks in Germany, I was very positively surprised that three out of four networks have the feature implemented and activated by now. Two of them switch the connection to the Cell-PCH state while one uses URA-PCH. Only one laggard remains, incidentally the worst performing network in recent comparisons.

So what's the difference between Cell-PCH and URA-PCH? In Cell-PCH state, the mobile needs to send a cell update message whenever it changes from one cell to another, so the network can send a paging message for incoming voice calls, SMS messages or IP packets via the right cell. When users are moving or are located just between two cells, this results in increased cell update signaling. URA-PCH, on the other hand, groups several cells into a common registration area (URA = UTRAN Registration Area), thus reducing the cell update signaling. Whether this is better than Cell-PCH depends, of course, on how many cells are in a URA.

How Antennas Change Over Time

Antennas and base station sites can be seen everywhere these days, but it's pretty difficult to see how they change over time. When I recently came home and looked out of the window, I had the impression that something had changed at the antenna site on the building on the opposite side of the street, but I couldn't quite figure out what was different. Then I remembered that I had taken a picture two years ago, so it was easy to compare.

And here's the result: The left part of the image (click to enlarge) shows how the antenna construction looks today, and the right part shows how it looked two years ago. Before the configuration was changed, there were three antennas covering each sector: one antenna installed on top and two antennas mounted closely side by side below it. Today, there's only a single antenna casing with at least two antennas inside, as can be deduced from the number of cables at the bottom of the casing. Furthermore, a second microwave antenna has been installed on the main mast, below the one already in use two years ago.

Quite a significant change, and I can only speculate why it was done!? I am pretty sure the top antenna belonged to a different network operator than the lower antennas. So does its absence mean that this operator no longer uses the tower? It's likely, as I am not aware of any antenna sharing deals between network operators. And how could the lower antennas have been changed at the same time as the upper antenna of presumably a different company was removed? Coincidence? Cooperation?

Questions upon questions. But one thing is for sure: I don't remember my surroundings in as much detail as I always thought, as otherwise I would have immediately noticed the missing top antenna instead of having to compare today's state with that of two years ago. That is interesting as well.

P.S.: Note that the sky is grey on both pictures. I'll let you draw your own conclusions…

The QUAM Story

Whenever I look at how the mobile space has developed in the US and how it could be so different from Europe, I easily forget that over here, very strange and incomprehensible things have happened as well. A very good case in point was the UMTS auction a decade ago, which yielded a total of around 50 billion euros in spectrum license fees for the German government from the 6 winners of the auction.

While the 4 incumbents subsequently built their UMTS networks, the two new entrants spectacularly failed and not only lost all those billions spent on licenses but also the little money that was left after the auction for actually building the networks. And it's not as if the backers of those two newcomers shouldn't have known better: they were Telefonica (Spain), Sonera (Finland) and France Telecom. The story of Quam, backed by Telefonica and Sonera, can be found in this recent article. Google offers a handy translation to English here.

The article is very informative but I still have the same questions I had before: How could they have spent all those billions and then run out of money… Incomprehensible.