MUROS Goes VAMOS And Becomes Interesting!

Back in 1991, when the first GSM networks were launched, and still today in many networks, the GSM world is simple when it comes to voice: one timeslot = one user. Some networks use half-rate codecs in some situations, basically splitting a timeslot so that two users each use every alternate timeslot occurrence. This in effect doubles voice capacity at some expense of voice quality. Back in 2008 I reported about a new scheme coming up called MUROS (Multiple Users Reusing One Slot), which can cram up to four users into one timeslot.

At the time I perceived it as an interesting idea but thought it probably wouldn't go very far because, as I assumed, MUROS would only work with new mobiles with special transceivers. Well, it turns out that is not the case: MUROS, now standardized in 3GPP and referred to as VAMOS (Voice services over Adaptive Multi-user channels on One Slot), also works with a significant number of mobiles already in the field.

In essence VAMOS works as follows: As a starting point, VAMOS can be combined with half-rate channels, i.e. the AMR half-rate or AMR-Wideband half-rate speech codecs are used, which already results in two users sharing one timeslot. Nothing new here. VAMOS then extends this scheme by transmitting a combination of two signals at the same time over the same channel, each with a different training sequence in the middle of the timeslot, which is used by the receiver for channel estimation. Each of the two mobiles that receive the combined data stream at the same time uses its knowledge of its individual training sequence to reconstruct its own part of the signal, effectively filtering away the second data stream as noise. Instead of today’s GMSK (Gaussian Minimum Shift Keying) modulation, VAMOS uses a QPSK modulation in the downlink direction that approximates a GMSK signal once the “noise” (i.e. the second signal) is filtered away.
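To make the idea more concrete, here is a toy illustration, not the actual standardized signal processing: one way to think about two users sharing one QPSK-like symbol stream is that user A's bits ride on the in-phase component and user B's bits on the quadrature component, and each receiver simply slices its own component while ignoring the other:

```python
import random

# Toy illustration (an assumption for clarity, not the 3GPP waveform):
# user A's bits are mapped onto the in-phase (I) component and user B's
# bits onto the quadrature (Q) component of one QPSK-like symbol stream.
bits_a = [random.randint(0, 1) for _ in range(16)]
bits_b = [random.randint(0, 1) for _ in range(16)]

# Map 0/1 to -1/+1 and combine both users into one complex baseband symbol.
tx = [complex(2 * a - 1, 2 * b - 1) for a, b in zip(bits_a, bits_b)]

# Each receiver recovers its own component, treating the other as "noise".
rx_a = [1 if s.real > 0 else 0 for s in tx]
rx_b = [1 if s.imag > 0 else 0 for s in tx]

assert rx_a == bits_a and rx_b == bits_b
```

In the real system the receivers of course have to deal with noise, channel distortion and imperfect separation, which is where the training sequences and SAIC come in.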

The scheme works best if the mobile has a Single Antenna Interference Cancellation (SAIC) receiver. I have found two papers on the Internet that describe this in more detail, one from NSN here and one from Ericsson here. From an implementation point of view the important thing is that for several years now, many mobiles have been equipped with SAIC receivers to improve their resistance against inter-cell interference, i.e. without VAMOS in mind at all. These receivers are also able to filter out local interference, a nice side benefit. The NSN paper, which must date back to 2009, mentions that over 1 billion devices with SAIC receivers have already been shipped. In other words, when VAMOS gets deployed, one does not have to wait for special VAMOS-capable devices to reach a critical mass before the benefits can be seen.

3GPP has also specified VAMOS extensions for mobile chipsets so they can be informed of the training sequence of the other signal not intended for them, which further improves their ability to filter out the second signal. Also, one paper mentions that it is even possible to pair a non-SAIC and a SAIC/VAMOS mobile on the same timeslot if the non-SAIC mobile gets more of the signal energy than the SAIC mobile, i.e. there is less noise from its point of view. And finally, two users could share a timeslot in legacy half-rate operation while a third uses a full-rate VAMOS channel, which in effect allows three users to share the timeslot; alternatively, two half-rate VAMOS channels can be combined with one full-rate VAMOS channel.

Another way to look at this is that VAMOS exploits the fact that no matter how good or bad signal conditions are for a mobile device at any particular time, the voice channel always uses the same power and the same modulation, in effect wasting spectrum. VAMOS doubles the number of bits sent per unit of time and thus doubles the efficiency of spectrum usage. That also means that VAMOS does not work everywhere in the cell. Especially in cell border areas, where the signal strength is weaker and interference from neighboring cells is higher than in the center of the cell, the connection might have to fall back to normal GMSK modulation. In effect, VAMOS adds the channel adaptivity that was introduced many years ago with GPRS and then EDGE for packet-switched data.

In the uplink direction, and that’s again an interesting twist, VAMOS uses the existing GMSK modulation scheme. In other words, no new transmitter elements are required in mobile devices. On the base station side, two antennas are required to tell the two GMSK transmissions from two devices apart, using Multi User MIMO (Multiple Input Multiple Output) algorithms.

In addition, the radio network has to monitor the signal quality in the uplink and in the downlink direction very carefully and change the timeslot configuration to single use in case the bit error rate becomes too high on a VAMOS channel for one of the users.

Altogether, a very interesting scheme. I can very well imagine that VAMOS is going to be used in emerging markets to expand capacity without additional hardware in the base stations. It should be noted, however, that additional calls on the air interface require additional backhaul capacity, so activating VAMOS may also require an upgrade of backhaul links. The papers also note that another benefit of using a timeslot for more than one or two users is the overall power reduction of the base station per call. That is perhaps a good argument in countries where diesel generators, solar panels and wind mills supply the electric power for base stations and every watt counts. In countries where network operators are measured in terms of voice quality, VAMOS might have a harder time. This probably depends on how much speech quality is impacted.

In a somewhat more distant future, when GSM service could be much reduced in countries that have deployed later-generation systems and only serves M2M traffic (data), roamers and other devices that are not 3G or perhaps LTE voice capable, one might perhaps even live with somewhat degraded speech quality in return for freed-up bandwidth that can then be put to good use for UMTS and LTE.

So I think it’s quite likely that in both emerging and developed markets, this is not the last we have heard of VAMOS.

University Of Oxford Events This October

Like last year, October is going to be an exciting month for me as I'll be in Oxford for a couple of days for two events:

The first is my two-day 'Beyond 3G – Bringing Networks, Terminals and the Web Together' course, which follows the lines of one of my books. I am very happy to co-present with Ajit Jaokar of Open Gardens and John Edwards of PicoChip, who will bring in their great expertise from their angles of the industry. It's scheduled for October 26th/27th and you can find out more about the course via the link above. It would be great to see you there!

The second event which follows on October 28th is the Forum Oxford Mobile Apps and Technologies Conference, an event definitely not to be missed! I've attended and presented at the previous three conferences and I think it's THE mobile event of the year to visit!

In addition, there are two more courses that will run in the two days before the conference that might also be of interest to you: Tomi Ahonen will present his 'Mobile as 7th of the Mass Media' course on October 26 and 27th and Ajit Jaokar has a one day course on 'Designing Multiplatform Apps: TV, Web, Mobile and Automotive Platforms' on October 27th.

It's going to be a packed week and I am very much looking forward to it!

Outrage: Adobe Flash Installs Chrome During Security Update

Dear Adobe,

So far I was only mildly annoyed by having to click on a check box to confirm the T&Cs during Adobe Flash player security updates, which now seem to happen on a more or less bi-weekly basis. Why is that necessary? Everyone else seems to be able to do security updates quite nicely in the background. Anyway, it has just become a lot worse:

This morning, while rushing through the latest Adobe Flash fix, the installer suddenly proclaimed that Google's Chrome browser was being installed. WHAT!!!!!????? Why, where, when!? Running the same security update on another computer in the same manual way (after a Firefox security update), I noticed that the Adobe security fix web page contains a check box, already ticked by default, which says that Google Chrome is going to be installed unless the box is un-ticked.

Adobe, why do you have to resort to this kind of thing for the extra buck? Most companies are humble about their security issues and install updates in the background without fuss. You, on the other hand, make it a cumbersome and now even a worrisome event. I understand you need to make money. However, is it ethical to install 3rd party software by default during a security fix installation? Please think this through and also talk to your friends at Oracle, who keep trying to install some sort of toolbar in my browsers during Java security updates unless I catch it before it starts. Reminds me of a post by Andrew Grill who's also fed up with such practices.

How About A 3GPP Cleanup Release?

It's a normal thing in an evolving world: some things are invented, documented and even standardized but, for one reason or another, never see the light of day. GSM, UMTS and LTE, all of them part of the 3GPP standardization process, are no different. Today, there are many, many features and options described in the specification documents that have never made it into real networks. The documents are bloated, and when you are looking for something specific, all that dead weight gets in the way of finding what you want. So how about having one 3GPP release dedicated to just one purpose: cleaning up the specs and removing stuff that is irrelevant today!?

It's not that this never happens; take for example this GERAN report from 2009, in which it is noted that the T-GSM 900 and PBCCH functionality was removed. Nevertheless, I think the specifications would benefit from a somewhat larger effort. I guess it is unlikely to happen, though, because where's the immediate financial gain in cleaning up…

Wi-Fi After 802.11n: It’s 802.11ac

It has been relatively quiet for a while on how Wi-Fi is going to develop now that the 802.11n standard has become widely accepted. But behind the scenes, companies in the IEEE are quietly working on the next generation of the standard, with first interesting results.

Perhaps they have run out of single letters, as the next version of the Wi-Fi specification will be called 802.11ac. Wikipedia contains an interesting entry on the enhancements and links to a current draft specification of the IEEE. The link is pretty interesting as it is the first time I have seen the IEEE publish drafts in public. Previously, drafts were kept inside the IEEE community until they were finished. A new openness?

Anyway, here are the features currently under development:

Wider channel bandwidths

The initial 802.11b, a and g standards were defined for channel bandwidths of 20 MHz. 802.11n then introduced channel bundling to 40 MHz. While this in theory doubles the available data rate, the issue, especially in the 2.4 GHz band, is that, above all in cities, many access points are on the air, and enlarging the channel to 40 MHz makes it even less likely that one can find an unused spot in the band.

As a consequence, several access points use the same channel, and hence the capacity of a network becomes dependent on how much data is transferred in other networks on the same channel. If the channel used by the other networks fully overlaps, interference is limited, but data still cannot be transferred while a packet is being sent or received by another access point. The collision avoidance scheme makes sure that packets are seen by other access points and clients, so lower speeds are not a consequence of interference but of the transmitters waiting to catch an opportunity to send their own packets to their own access points on the channel.

There is also the scenario in which channels only partly overlap. In this case only a part of the channel used by an access point is shared with the other network, and the other network's packets are therefore not detected. This results in a reduction of throughput due to real interference: because of the only partial channel overlap, the packets of the other network cannot be correctly received, and collision avoidance therefore does not work.

With 802.11ac, bandwidth aggregations of up to 80 MHz and 160 MHz are defined, making the issues described above even worse. In the 2.4 GHz band it's not even possible to aggregate 160 MHz, as the overall band is smaller than that. There is more spectrum available in the 5 GHz band, but even there, 80 MHz or 160 MHz channel aggregations will be a stretch. Also, networks using the 5 GHz band have a more limited coverage area than those on 2.4 GHz, as signal absorption by walls and other obstacles is much higher than at 2.4 GHz.
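A quick back-of-the-envelope calculation shows why wide channels and the 2.4 GHz band don't mix: the band is only about 83.5 MHz wide (2400 to 2483.5 MHz), so there is simply no room.

```python
# Back-of-the-envelope check: how many non-overlapping channels of a given
# width fit into the roughly 83.5 MHz wide 2.4 GHz ISM band?
BAND_WIDTH_MHZ = 83.5

def channels_that_fit(channel_width_mhz: float) -> int:
    return int(BAND_WIDTH_MHZ // channel_width_mhz)

for width in (20, 40, 80, 160):
    print(f"{width} MHz: {channels_that_fit(width)} channel(s) fit")
```

Even a single 80 MHz channel would consume almost the entire band, and a 160 MHz channel does not fit at all.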

More MIMO Streams and Multi-User MIMO

802.11n introduced Multiple Input Multiple Output (MIMO) transmission, i.e. transmitting several independent spatial streams over the same channel simultaneously. 2, 3 and 4 antennas for the same number of spatial streams are defined so far, and 802.11ac increases the number to up to 8 spatial streams. Enjoy 8 antennas on your access point and mobile devices.

Mobile devices can have fewer antennas and will consequently only be able to use a number of spatial streams in line with their number of antennas. 802.11ac, however, specifies Multi-User MIMO, i.e. if the access point can handle more data streams than an individual mobile device, several devices can send their data simultaneously. In the other direction, the access point can send several MIMO streams to different devices simultaneously.
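As a rough sketch of the downlink idea (a greedy allocation I made up for illustration, not anything the standard prescribes — scheduling is left to implementations), an 8-stream access point could serve several devices at once, each capped by its own antenna count:

```python
def allocate_streams(ap_streams: int, device_antennas: list[int]) -> list[int]:
    # Greedily hand out spatial streams until the AP's capacity is exhausted;
    # each device can use at most as many streams as it has antennas.
    allocation = []
    remaining = ap_streams
    for antennas in device_antennas:
        streams = min(antennas, remaining)
        allocation.append(streams)
        remaining -= streams
    return allocation

# Three devices with 2, 1 and 4 antennas share 8 downlink streams.
print(allocate_streams(8, [2, 1, 4]))  # → [2, 1, 4], with one stream spare
```

The point is only that the per-device antenna count, not the access point, is the limiting factor, and MU-MIMO lets the access point use its surplus streams for other devices.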

Even Higher Modulation

Current state-of-the-art Wi-Fi uses up to 64-QAM modulation if the access point and the mobile device are close to each other. 64-QAM encodes 6 data bits per transmission step. 802.11ac takes this one step further to 256-QAM, i.e. transmitting 8 bits per transmission step. The coding rates for such transmissions, i.e. the ratio of user data bits to the total number of bits including those for error detection and correction, are 3/4 and 5/6.
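The net data bits per transmission step are simply the raw bits per symbol multiplied by the coding rate, which makes the gain from 256-QAM easy to quantify:

```python
from fractions import Fraction

def net_bits_per_symbol(qam_order: int, coding_rate: Fraction) -> Fraction:
    # log2 of the constellation size gives the raw bits per transmission
    # step; the coding rate scales that down to net user data bits.
    raw_bits = qam_order.bit_length() - 1  # exact log2 for powers of two
    return raw_bits * coding_rate

for qam, rate in [(64, Fraction(5, 6)), (256, Fraction(3, 4)), (256, Fraction(5, 6))]:
    print(f"{qam}-QAM, rate {rate}: {float(net_bits_per_symbol(qam, rate)):.2f} net bits/symbol")
```

So at the same coding rate of 5/6, 256-QAM carries 6.67 instead of 5 net bits per transmission step, a 33% increase.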

Backwards compatibility

And, of course, a most important requirement is that 802.11ac devices and access points must be able to co-exist with older 802.11 devices.

An Interesting Future

A marvelous challenge to put all of this in a specification. And even more of a challenge, I am sure, to put this into real devices and make it work in a backward compatible way. Things remain interesting!

5 Years Ago in Mobile

About 5 and a half years ago I started this blog and it's been a tremendous project ever since. One of the benefits that now appears is that I can look back and see what was "moving" me 5 years ago, giving interesting insights into how the mobile landscape has changed since. So here's what I wrote about 5 years ago, in August 2006:

Smartphone Wi-Fi Sharing: Google introduced Wi-Fi sharing on their Android platform not too long ago. My first thoughts on the topic are back from August 2006. The Nokia N80 was one of the first if not the first phone with a Wi-Fi chip inside, without Wi-Fi sharing of course. At the time the discussion was more about whether Wi-Fi will survive in phones at all with some network operators being less than happy about it in the first place.

3G Roaming Issues: According to my notes I bought my first 3G phone in December 2004. One and a half years later, many 3G networks were launched and I made my first roaming experiences. At the time, there were still quite a number of interoperability issues between my mobile device and different networks I tried it in as documented in this post on "roaming pleasures with pitfalls". Since then things have improved a lot but there are still some quirks today as documented here in 2011.

3G Connection Sharing: One of the most popular blog entries ever, according to my statistics, is this post on how to share a 3G connection with others via Windows XP. Now that Wi-Fi sharing is becoming more commonplace on smartphones, the necessity for this is likely to diminish. But for years this approach has served me on many occasions.

3G Video Calls: Yes, video calling started to pick up in 2006, as reported here from an Italian supermarket. The trend didn't accelerate, though, for many reasons such as patchy 3G coverage and steep pricing, but it has found its uses. With Skype on the desktop today, iPhone FaceTime, better 3G networks and a "different" pricing structure for calls (you pay for connectivity, not for call duration), things might yet take another turn.

EDGE: Faster GPRS was on its way to networks helping me in many situations where 3G coverage could not be found.

US Spectrum Auctions: The AWS band (1700/2100 MHz) was on the block and T-Mobile bought quite a bit of it to launch their 3G service in the US. At the time I asked what the US government would do with the money. Looks like they did what everyone else did: they used it for other purposes than fostering the telecommunications landscape in their country.

Phone Software Update: My first phone that allowed software updates from home was the Siemens S45 and I made good use of it to improve the stability of GPRS, especially while roaming. In 2006, Nokia also added this functionality to their smartphones and I reported on updating my, at the time, brand new N70. This has since become a common phenomenon; semi- or fully automatic updates of installed apps, like on the PC, are now the norm rather than the exception. It's another indication of how the PC and mobile worlds are moving closer to each other.

2 Billion Mobile Users: The middle of the last decade was the time when mobile accelerated in developing countries. In 2006, 2 billion subscriptions on the globe had been reached, up from one billion two and a half years earlier. Today, 5 years later, we are well beyond the 5 billion mark and the number of subscriptions is still rising at a similar pace as back in 2006, as per the "subscriber counter" at the bottom of the GSM Association web site. Incredible!

Will Smartphones Drive 3G Voice Adoption?

One thing I am wondering about when observing people in trains and restaurants, now using smartphones in significant numbers to access Internet-based services, is what kind of effect this has on the shift of voice calls from GSM over to UMTS. Before the smartphone boom, I knew many people who bought a new UMTS-capable mobile device but locked it to 2G only to conserve battery power. I don't know too many people who do that anymore. When using a smartphone and Internet-based services, locking the device to 2G, for whatever reason, is the last thing people want to do now. Consequently, voice calls that would previously have been made over the 2G network are now made over 3G, thereby reducing the load on the 2G network. The effect is likely countered to some degree by rising voice minutes per user per month in many countries, but from a 2G/3G voice call distribution point of view I can very well imagine smartphones making a difference today.

The Moving Offload Challenge

Cellular offload to Wi-Fi is a hot topic these days in the industry but from an implementation point of view we are just at the beginning, especially on the mobile device side of things. Pretty much all smartphones today have a Wi-Fi interface in addition to their cellular connectivity so from a hardware point of view they are ready for offloading. Unfortunately, when switching from cellular to Wi-Fi, a number of things happen today when the Wi-Fi hotspot is public:

  • The IP address changes and the 3G connection is usually cut. In other words, ongoing connections are interrupted. Bad if you are watching a YouTube video, for example.
  • The public Wi-Fi hotspot usually requires some form of authentication.
  • The coverage area of the Wi-Fi hotspot is rather small.
  • Data rates at the coverage edge are very low.
  • Sometimes the backhaul of the Wi-Fi hotspot has a lower capacity than the cellular network, resulting in lower hotspot speeds independent of the coverage situation.

As long as the user does not move, most of these issues do not really matter in practice. However, in real life most subscribers are moving, and here's a personal example of where concurrent Wi-Fi / 3G connectivity becomes a problem:

When going to and coming from work I usually use the time in the train to get some things done online with my netbook and Internet connectivity. As my data subscription also includes Wi-Fi connectivity via my network operator's Wi-Fi network, I initially set my netbook to auto-connect to these hotspots. In places where I knew Wi-Fi connectivity existed I didn't bother to use my 3G stick but instead used the Wi-Fi because it is more convenient. But I figured out quite soon that this was not ideal. When I am on the train, my netbook immediately recognizes Wi-Fi hotspots of my network operator in train stations, and connectivity over the Wi-Fi interface gets precedence over the 3G connectivity in the OS. As a manual authentication procedure is required after connecting to the Wi-Fi hotspot, this in effect disconnects my 3G connectivity, as all packets try to flow over the Wi-Fi link but can't because I haven't yet authenticated. Yes, one could automate that. However, it wouldn't help, because after a minute the train leaves the station again and connectivity is lost. At that point the 3G connectivity is still there, but all connections that have just switched over to the Wi-Fi link are broken once again.

Of course this could be fixed by having a piece of software on the mobile device and in the network so that the same IP address is used over both interfaces. But such software is not here yet. Also, keeping the same IP address alone would not fix the issue of slower connectivity at the Wi-Fi coverage edge or in case the backhaul of the hotspot is under-dimensioned. I have also seen in practice that at the Wi-Fi coverage edge, connectivity is still present but packets are no longer received in either direction. Again, software could help to fix this, but we are a long way away from that, too.
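As a sketch of what such device-side software might do — the thresholds here are purely illustrative assumptions of mine, not anything from an existing connection manager — a sensible policy could be to refuse to switch to Wi-Fi unless the link is authenticated, strong enough, and the device has been near the hotspot for a while:

```python
# Hypothetical offload policy sketch; the -65 dBm and 120 s thresholds
# are assumptions for illustration only.
def should_offload(wifi_rssi_dbm: int, authenticated: bool,
                   stationary_seconds: int) -> bool:
    return (authenticated
            and wifi_rssi_dbm > -65          # not at the coverage edge
            and stationary_seconds > 120)    # not just passing a hotspot

assert not should_offload(-60, True, 30)   # train stopping briefly: stay on 3G
assert should_offload(-55, True, 600)      # settled in one place: offload
```

A policy along these lines would have kept my train connection on 3G during short station stops while still offloading when I stay in one place for a longer time.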

Speaking of additional software on the mobile device, where should it be located? Today network operators deliver "connection manager" software with their data sticks that runs on the operating system. Not everybody likes these programs, as they might do more than some want, and not all people can use them, me for example, as I use Ubuntu. Another option is to have that software reside on the data stick, which in addition to cellular connectivity could also contain a Wi-Fi chip. To the netbook or other device using such a stick or mini PCI card, it would just appear as a single network interface and everything would be handled internally. The downside of this from a user's point of view is that it would bind the data stick or mini PCI card to a specific network operator, due to the on-board software managing the switching between cellular and Wi-Fi. Not sure if that is a good idea either.

My conclusion from all this is that I have removed the auto-connection to the Wi-Fi hotspot network from my devices and only connect manually when I know I want to stay in a place for a longer time. But I have to do this by hand, which is about as convenient as getting the 3G stick out of my pocket in the first place and connecting it…

When Products Fail With Long Passwords

I have two Wi-Fi enabled printers in my network and both have a web server for configuration. So far I hadn't set a password on either of them, but lately I thought it might be a good idea to do so, with interesting results:

As I like long passwords for security reasons, I chose a 20-character password, which at first seemed to work: no error messages when setting the password. But when accessing the printers again, neither would allow me to log on with my 20-character password!? After some trial and error I established that I could access my HP Photosmart C7280 when using only the first 16 characters of the initial 20. The same with my brand new Samsung ML-2525W, which only let me back into the menu when I used only the first 18 characters of the original password. Now there are four things that are very wrong with this:

  1. The password length is too short.
  2. It seems the passwords themselves are stored, silently truncated, and not a hash value, thus creating the problem. Storing the password instead of a hash value is very unsafe, by the way…
  3. Why was there no error message that the password was too long?
  4. There is no delay between two login events, so a brute force attack is possible.
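Point 2 is straightforward to avoid. As a minimal sketch, assuming nothing about the printers' actual firmware, storing a salted hash of the password instead of the password itself removes any dependence on password length and so also avoids the truncation problem:

```python
import hashlib
import hmac
import os

def store_password(password: str) -> tuple[bytes, bytes]:
    # Derive a salted hash; only the salt and digest are stored,
    # never the password itself.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(candidate, digest)

salt, digest = store_password("a 20+ character passphrase works fine here")
assert verify_password("a 20+ character passphrase works fine here", salt, digest)
assert not verify_password("wrong password", salt, digest)
```

With a scheme like this, the stored value always has the same size regardless of how long the password is, so there is nothing to truncate, and a password of any length can be accepted.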

If I were daring, I'd try special characters in the passwords now… But I'll spare myself the trouble.

Rise and Resurrection of the 2D Barcode?

2D barcodes for mobile use have been on the horizon for at least half a decade. My first blog entry on the topic that I could find with a quick search seems to be from 2006, and just this year I had pretty much given up on the idea of seeing a breakthrough anytime soon. And just when I've put the idea out of my mind, they seem to be resurfacing quite massively. A case in point is the picture on the left, which I recently took in Cologne. They can't get any bigger than this, can they!? When looking around a bit in that neighborhood I noticed a few more 2D barcodes on billboards and also at restaurants (with links to their Facebook account or website). Looks like the advertisement industry keeps pushing.