Draft 802.11n Requires Access Points To Fall Back To A Single Channel When Overlapping Networks Are Detected

I am having a good time these days browsing through the current draft D2.00 of the 802.11n standard to find out about the details of the compromise reached in the IEEE working group for the new 100+ MBit/s Wifi standard. Besides MIMO, one of the cornerstones of reaching speeds beyond 100 MBit/s on the application layer is to combine two standard 20 MHz channels and transmit on them simultaneously.

This is pretty difficult, if not impossible, in the 2.4 GHz band, which only has space for 3 independent 20 MHz networks or a single 802.11n 40 MHz network together with one 20 MHz legacy network (for details see here). In my Paris flat, for example, there are already 13 networks operating in this band, many using the same channels.

In such an environment, a ‘draft n’ compliant access point has no chance to use a 40 MHz channel, as according to chapter 9.20.4 of the draft standard, an access point detecting frames of another network on its primary or secondary 20 MHz channel has to immediately deactivate the 40 MHz channel mode. Further, it has to remain in 20 MHz channel mode for at least 30 minutes after the last frame from a different network has been detected.
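
What this amounts to in practice can be sketched in a few lines of code. The following is a minimal Python sketch of the fallback rule described above, assuming some hypothetical scanning logic calls on_foreign_frame() whenever a frame from a different network is detected; the 30-minute holdoff is the value from the draft.

```python
import time

# Draft 802.11n (chapter 9.20.4): after a frame from another network is seen
# on the primary or secondary channel, 40 MHz mode must stay off for 30 minutes.
HOLDOFF_SECONDS = 30 * 60

class ChannelWidthManager:
    def __init__(self):
        self.forty_mhz_enabled = True
        self.last_foreign_frame = None  # monotonic timestamp, or None

    def on_foreign_frame(self):
        # Frame from a different BSS detected on the primary or secondary
        # 20 MHz channel: deactivate 40 MHz operation immediately.
        self.forty_mhz_enabled = False
        self.last_foreign_frame = time.monotonic()

    def reevaluate(self):
        # 40 MHz operation may only resume once the full holdoff period has
        # passed without any further foreign frames being detected.
        if self.last_foreign_frame is None:
            return
        if time.monotonic() - self.last_foreign_frame >= HOLDOFF_SECONDS:
            self.forty_mhz_enabled = True
```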

I guess the standard allows the access point to switch to another channel to avoid the detected network, but in the 2.4 GHz ISM band there is only one alternative. So I wonder if some vendors have put an option into their settings that allows locking the access point to a 40 MHz channel!? Not that this would be very polite, and it could cause problems for both other networks and one’s own if the other networks carry more than an occasional traffic burst.

So if you have a ‘draft n’ network at home, what kind of access point do you have and does it allow locking operation to 40 MHz?

Deactivating the Vodafone Websession Compression Proxy

I am quite happy with Vodafone Germany’s Web Session offer that gives me fast 3G Internet access in most European countries and in some countries overseas. I’ve reported about this extensively here. One of the things that bothered me, however, was the automatic compression of pictures in web pages. This reduces the amount of data to be transmitted, but in the times of HSDPA that’s not necessary anymore. When buying a PCMCIA card and the required software from Vodafone for the service there is an option in the software to deactivate the compression. If you buy a standalone prepaid SIM card, however, things are a bit more tricky.

One way to get around the compression is to use a VPN software that tunnels all traffic, so Vodafone’s transparent HTTP proxy cannot touch the pictures. In some circumstances, such a solution is not practicable or not even available to all users. So I searched a bit on the web to see if there are ways to deactivate the proxy without the Vodafone software. And indeed, there is! Here and here are two links to the original German articles that describe how the proxy can be instructed not to compress the pictures. In essence this is done by including extra HTTP header lines in each page request, which are picked up by the proxy and tell it not to compress the images. According to one article, this works for Vodafone Germany and also for E-Plus, another German operator.

[Screenshot: Modify Headers configuration tab]
To get these extra header lines into a request, an add-on called "Modify Headers" is required for Firefox. The add-on can be installed into the browser right from the Mozilla Add-On Web Page. Once installed, a new menu entry called "Modify Headers" is available in the "Tools" menu of Firefox. In the configuration tab, select "Always On: Enable Modify Headers when this window is closed". Afterwards, two new header fields have to be added manually. In the "Headers" tab, one new header called "Cache-Control" has to be created and another one called "Pragma". Both headers have to be set to contain "no-cache". That’s it!
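
For scripted downloads outside the browser the same trick can be applied directly when building the request. Here is a quick sketch with Python's requests library; the URL is only a placeholder, and the two header values are exactly those configured in the add-on.

```python
import requests

# The two header lines the Modify Headers add-on injects. They are picked up
# by the operator's transparent proxy and tell it not to recompress images.
NO_CACHE_HEADERS = {
    "Cache-Control": "no-cache",
    "Pragma": "no-cache",
}

response = requests.get("http://example.com/picture.jpg", headers=NO_CACHE_HEADERS)
print(response.status_code, len(response.content), "bytes received")
```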

[Screenshot: Modify Headers with the two configured header lines]
Restart Firefox and the nasty compression is gone. If you go to pages that have previously been loaded, they are probably still in the local cache and thus still look ugly. In that case, press "CTRL" or "SHIFT" together with the reload button of Firefox and the images are refreshed to their uncompressed state. Below are two screenshots of HTTP request packets traced with Wireshark that show how the HTTP headers look before the tool is switched on and afterwards, when they include the two additional header lines.

[Screenshot: HTTP request headers before modification]
[Screenshot: HTTP request headers with the two additional lines]

The 5 GHz El Dorado for Wifi

As a follow-up to my recent entry on the growing number of Wifi networks in the Paris apartment building I live in, I did a bit of research on how much bandwidth is available in the 5 GHz range compared to the 2.4 GHz ISM band that 99% of today’s Wifi networks use.

Before taking a closer look it should be noted that both the 2.4 GHz and the 5 GHz band for Wifi are controlled by national regulators. Thus values such as bandwidth and transmission power are country dependent. For this blog entry I’ve chosen to work with the values applicable for Germany.

Let’s look at the 2.4 GHz ISM band first: It ranges from roughly 2.4 GHz to 2.483 GHz. An 802.11b or 802.11g channel requires around 22 MHz of bandwidth, which means that there can be at most 3 non-overlapping Wifi networks in the 2.4 GHz range. In my Paris example, however, 13 networks share this frequency range. Things work o.k. as long as the individual networks don’t carry a lot of traffic, as each packet is marked with the access point or client device it is intended for. Packets sent on one network are also received by devices of other networks using the same channel, but are simply ignored.

Nevertheless, it’s obvious that the performance of individual networks on the same channel won’t be great once more than one carries video streaming or other bandwidth hungry applications. With the new 802.11n standard, things become even worse. To reach ever higher transmission speeds it’s possible to double the channel bandwidth compared to current 11b or 11g networks. In the 2.4 GHz ISM band this means that it’s not even possible to squeeze in two such networks in a non-overlapping fashion.

The only way out of this is to put some of the traffic into the 5 GHz band. Compared to the 70-80 MHz available in the 2.4 GHz band, there’s 455 MHz available for unlicensed wireless networks in Germany (see German RegTP info PDF here). The band spans the frequency ranges of 5.15 to 5.35 GHz (200 MHz) and from 5.47 to 5.725 GHz (255 MHz). Consequently, around 18 single channel 802.11n Wifi networks can co-exist in this space or around 9 that use a double channel. The standard and regulatory requirements also foresee that the networks dynamically select an appropriate channel based on interference encountered. This is required to prevent Wifi networks from interfering with other applications such as military radar. It also has the nice benefit of removing the necessity for users to select a channel.
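
As a cross-check, counting the channels of the usual 20 MHz channel raster (channels 36-64 in the lower band and 100-140 in the upper band, an assumption not spelled out in the regulatory document) arrives at roughly the same numbers:

```python
# German 5 GHz allocation, counted along the usual 20 MHz channel raster
lower_band = list(range(36, 65, 4))    # 5.15 - 5.35 GHz: channels 36..64
upper_band = list(range(100, 141, 4))  # 5.47 - 5.725 GHz: channels 100..140

single_channels = len(lower_band) + len(upper_band)  # 19 x 20 MHz channels
double_channels = single_channels // 2               # about 9 x 40 MHz channels

print(single_channels, "single (20 MHz) channels")
print(double_channels, "double (40 MHz) channels")
```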

Disadvantages of Using the 5 GHz band

Unfortunately, there are also two disadvantages in using the 5 GHz band. The higher the frequency, the shorter the range at a given power level. In the 5 GHz band a Wifi device must therefore transmit with a higher power level to cover the same distance as in the 2.4 GHz band. The power level, however, is restricted: In the 5250 to 5350 MHz band (4 channels) power output is limited to 10 mW / MHz, which translates into around 200 mW per Wifi channel. Between 5.470 and 5.725 GHz (about 11 channels) power output is limited to 800 mW. Since most 2.4 GHz Wifi equipment today transmits at less than 50 mW, this is probably not going to be a big problem.
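
The per-channel figure quoted above is simply the allowed power density multiplied by the channel width:

```python
# Power limit in the 5250-5350 MHz range, as mentioned above
density_mw_per_mhz = 10   # 10 mW per MHz
channel_width_mhz = 20    # one 802.11 channel
print(density_mw_per_mhz * channel_width_mhz, "mW per 20 MHz channel")  # 200 mW
```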

The second disadvantage of using the 5 GHz band is price. Devices supporting the band must also support the 2.4 GHz band for backwards compatibility. Access points should even support both bands simultaneously to serve both legacy and new high speed devices. So the question is how much more 5 GHz power amplifiers will cost compared to 2.4 GHz amplifiers and whether combined 2.4/5 GHz chips will become available soon. Apple’s Airport is one of the first mass market access points that makes use of both the 2.4 and 5 GHz bands. The current retail price is $179. I’d say that looks pretty promising already, as prices will surely go down over time.

HSDPA On A High Speed Train – Part 2

In the previous blog entry (here) I started to report on my experiences while using Vodafone Germany’s 3G HSDPA network on board a high speed train from Saarbrücken to Frankfurt. On this line the 3G network experience is quite positive; for the general remarks see my first entry. In this second entry I’ll show some information retrieved from Wireshark traces I took during the ride. They reveal things that are very difficult to simulate in the lab without special equipment. In short, a treasure chest for the TCP researcher.

[Figure 1: Throughput during the 6 MB file download at 200 km/h]
All figures shown in this blog entry were made at a train speed of about 200 km/h and come from a trace of a 6 MB file download. Figure one on the left shows the throughput during the file download. Total transmission time for the file was about 75 seconds, top throughput about 1.5 MBit/s and average throughput about 800 kbit/s. During the file download three transmission outages can be seen at 25s, 43s and 64s. Each of them lasted about 2.3 seconds. These outages were either caused by a handover or by very bad network coverage at these times.
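
A throughput graph like figure 1 can also be reproduced outside of Wireshark by bucketing the received bytes per second. A minimal sketch, assuming the downlink packets have been exported from the trace as (timestamp, frame length) pairs:

```python
from collections import defaultdict

def throughput_per_second(packets):
    """packets: iterable of (timestamp_seconds, frame_length_bytes) tuples
    from the downlink direction of the trace.

    Returns {second_since_start: kbit/s}, roughly what Wireshark's IO graph
    plots. The outages show up as seconds with close to zero throughput.
    """
    packets = list(packets)
    if not packets:
        return {}
    start = min(ts for ts, _ in packets)
    buckets = defaultdict(int)
    for ts, length in packets:
        buckets[int(ts - start)] += length
    return {sec: byte_count * 8 / 1000.0 for sec, byte_count in sorted(buckets.items())}
```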

[Figure 2: TCP retransmission after the outage]
The trace behind the graph reveals that despite the outage no TCP packets were lost, so obviously the RLC layer of the radio network recovered all packets. On the TCP layer, however, such a prolonged outage without acknowledgments provoked a TCP timeout, resulting in the automatic retransmission of about 15 kBytes of data (11 packets). This is shown in figure 2 on the left. Packet 2796, marked in black, is the last packet received before the outage at 23.52s. Communication resumes at 25.76 seconds, and it can be seen from the ACK numbers that no packet is missing. The green packets that follow must thus have been saved by the RNC in the radio network. Starting with packet 2809 the sender suddenly retransmits packets (the black block in figure 2) that had already been received and ACK’d after the outage. However, the ACKs were not received by the sender of the data in time, which provoked the TCP timeout and the automatic retransmission.
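
Such spurious retransmissions can also be found programmatically in a receiver-side trace: any downlink segment that only carries data the receiver has already acknowledged was retransmitted needlessly. Here is a small sketch under that assumption, with sequence and ACK numbers taken from the trace:

```python
def spurious_retransmissions(data_segments, acks):
    """data_segments: downlink TCP segments as (time, seq, length) tuples.
    acks: acknowledgements sent by the receiver as (time, ack_number) tuples.

    A data segment whose payload lies entirely below the highest ACK the
    receiver had already sent when the segment arrived repeats data that was
    already confirmed, i.e. a needless retransmission after an RTO as in figure 2.
    """
    result = []
    for t, seq, length in data_segments:
        highest_ack = max((a for at, a in acks if at < t), default=0)
        if seq + length <= highest_ack:
            result.append((t, seq, length))
    return result
```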

[Figure 3: Packet interspacing diagram for the file download]
Figure 3 on the left shows the packet interspacing diagram for the file download, which tells a lot about HSDPA HARQ (Hybrid Automatic Repeat Request) operation on layer 2 of the air interface between the Node-B and the mobile. I was quite surprised to see that even at such high user speeds most packets were delivered without retransmission, with only about 20-30% going through one retransmission. This is quite similar to the traces I made when not moving, as shown for example in this blog entry.
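
The interspacing diagram is essentially a histogram of the time between consecutive downlink packets: packets delayed by a HARQ retransmission form a second cluster at larger gaps. A sketch of the same calculation, assuming the arrival timestamps have been exported from the trace:

```python
from collections import Counter

def interarrival_histogram(arrival_times, bucket_ms=2):
    """arrival_times: sorted arrival timestamps (in seconds) of downlink packets.

    Returns a Counter mapping the inter-arrival time (rounded to bucket_ms
    milliseconds) to the number of packets, i.e. the data behind figure 3.
    """
    gaps_ms = [(b - a) * 1000.0 for a, b in zip(arrival_times, arrival_times[1:])]
    return Counter(round(gap / bucket_ms) * bucket_ms for gap in gaps_ms)
```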

Summary

The trace discussed above shows impressively that high speed Internet access on board high speed trains is possible without any on board equipment. The radio network used for the test was certainly not optimized to give the best coverage along the railway. Nevertheless, overall throughput and recovery behavior on the TCP layer are impressive.

3G and HSDPA Internet Access On A High Speed Train

So far I’ve tested HSDPA all across Europe and have enthusiastically reported the great results on this blog (see here). Usually, I use HSDPA in a nomadic fashion, i.e. while being at home, in hotel rooms, on customer sites, etc. This is simple for the network: not much mobility management, no handovers, stable radio environments, thus not much of a challenge. But how does HSDPA perform on a high speed train? I didn’t know until recently, when I took a German ICE high speed train on the brand new LGV Est Européenne (Ligne à Grande Vitesse) from Paris to Frankfurt.

From a radio point of view the line is kind of black and white. On the French side, UMTS / HSDPA radio coverage is almost non-existent. Even while still in Paris, my mobile frequently lost the 3G network once we got out of the train station. Once on the German part of the track, however, I had HSDPA coverage about 70% of the time during the 2h trip between Saarbrücken and Frankfurt. During the rest of the time, my connection fell back to the 2G network, and there were only very few places without any network coverage at all.

The Network Under Test: Vodafone Germany’s 3.5G network

The Test Equipment: No fancy stuff, just a notebook and a Motorola V3xx HSDPA category 6 mobile phone, bought back in Rome a couple of weeks ago.

The Result:

During the 2 hours I ran a lot of throughput tests and downloaded around 75 MB of data. The train speed during most of the tests ranged between 150 and 200 km/h. Very surprisingly, speed did not seem to have a great impact on the data rates. No matter how fast the train was going, I always got peak data rates of about 1.5 MBit/s while radio coverage was good.

As there was no dedicated 3G radio coverage for the track there were of course also periods during which radio coverage was poor. Here, data rates dropped but were still at a respectable level of around 250 – 500 kbit/s.

I was also very positively surprised by the handover performance. Shortly before a handover occurred, radio conditions usually got quite bad, so the file downloads slowed considerably. Then there was a gap of around 1 or 2 seconds before the situation improved, and the transfer speeds recovered within a few seconds. Downloads of 6 MByte files had an average throughput, according to Wireshark statistics, of 850 kbit/s with peak data rates of around 1.5 MBit/s. Not a single download failed!

To get a better feeling for the handover behavior I checked the link stability during handovers by sending pings to the network. Packet loss was minimal and seldom were two ping responses lost in a row, which would have pointed to a prolonged network outage. To see how many packets are lost during a handover I set the ping timeout to 500 ms. Here, single packet loss started to increase, which points to a connection interruption during handovers or multiple failed RLC retransmissions during bad radio conditions. Most packets that were reported as lost due to the reduced timeout were nevertheless delivered, just a bit too late to be counted as a valid response. A Wireshark trace revealed that almost all ping responses eventually made it back to the notebook. This test therefore indicates that HSDPA handovers take between 0.5 and 1 second. Sounds almost too good to be true. When analyzing some of the Wireshark traces in which I recorded the throughput tests, however, I could see that at some points the radio connection was lost for about 2.5 seconds (more about this in the next episode). Whether this was due to handovers or simply very bad radio conditions is difficult to say. But even if this can be attributed to handovers only, it’s not too bad for a start. It’s probably also important to point out that it’s still early days for HSDPA and optimal handover performance was surely not very high on the R&D agenda so far.
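
The effect of the reduced timeout can be illustrated by comparing each ping's actual round trip time against the deadline: a reply that arrives after the timeout is reported as lost even though it was delivered. A small sketch, with the round trip times assumed to have been read from the Wireshark trace:

```python
def classify_pings(rtts_ms, timeout_ms=500):
    """rtts_ms: round trip time per echo request in milliseconds,
    or None if no reply ever appeared in the trace.

    Returns (answered_in_time, late_but_delivered, truly_lost). With a 500 ms
    timeout, replies delayed by a handover fall into the second bucket:
    counted as lost by the ping tool, but visible in the trace nonetheless.
    """
    in_time = sum(1 for r in rtts_ms if r is not None and r <= timeout_ms)
    late = sum(1 for r in rtts_ms if r is not None and r > timeout_ms)
    lost = sum(1 for r in rtts_ms if r is None)
    return in_time, late, lost

# Example: a ~1 s handover shows up as "late" replies rather than real loss.
print(classify_pings([45, 50, 820, 60, None, 1400, 55]))  # (4, 2, 1)
```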

File downloads and ping experiments are not typical network usage, so I also tested sending and receiving e-mails and web browsing. I have to say I felt little to no difference in page download times while moving at high speed in a train compared to sitting at my desk at home.

Also worth mentioning is the software stability of the Motorola V3xx mobile. While most other mobiles I used in the past have sooner or later become confused by the many handovers and 2G/3G network changes with an active Internet connection, the V3xx was rock stable. Not a single reboot was required during the whole trip and the mobile even performed 2G to 3G network reselections during file transfers.

Apart from the good HSDPA performance, Vodafone has done a good job engineering their network between Saarbrücken and Frankfurt. During the two hours the Internet connection did not drop once (e.g. due to missing datafill on the SGSNs for intersystem handovers). This is rather exceptional, as on other lines, like for example between Munich and Stuttgart, my Internet connection usually drops a couple of times. So whoever did the network verification along that track, please Vodafone, send him to optimize the Munich - Stuttgart line as well. And while you are at it, install additional 3G base stations along the line; I’d really appreciate the same performance as between Saarbrücken and Frankfurt.

Summary

Before doing the tests I was a bit skeptical about the outcome. The good results, however, speak for themselves and certainly answered a lot of questions concerning high speed Internet access on high speed trains. The results also indicate that dedicated 3G train line coverage would fill the gaps observed and result in a very smooth user experience independent of train speed and also without any on board equipment such as 3G/Wifi bridges.

Stay tuned for the technical deep dive once I have analyzed the Wireshark trace I took in more detail.

When Is GSM Going To Be Switched Off?

Back in 2002 the verdict on GSM from most was pretty clear. GSM had just celebrated its 10th birthday in the real world, UMTS was at the doorstep, and looking at the lifetimes of analog wireless systems it seemed certain that in another ten years (2012) GSM would be a thing of the past. Well, today 2012 is just 5 years away and I think GSM in Europe will stay much longer than that.

So what has changed then since 2002? I think quite a number of things:

Equipment Refresh: In 2002, GSM equipment started to age a bit as the hardware used in the network had not changed a whole lot. But since then virtually all network vendors have completely refreshed their network equipment, from base station to core network router. This was not only a desire but a straightforward necessity, as the parts for aging designs (e.g. 486 processors) were no longer available at reasonable cost. Hardware evolution also meant lower prices. GSM Base Station Controllers sold today, for example, are no less capable than the latest 3G Radio Network Controllers in terms of processing power, memory or storage capacity. GSM base station prices and sizes also keep shrinking, so networks become cheaper and cheaper.

New Entrants: Another reason for refreshing aging hardware designs was surely the entry of Chinese companies like Huawei and ZTE into the GSM and 3G market with new hardware and lower prices, so established vendors could no longer afford to continue selling expensive hardware.

New Markets: Back in 2002 it was not yet clear to most that GSM would have such tremendous success in emerging economies in Asia, India and Africa. Compared to the 2.5 billion or so GSM subscribers there are today, the few (hundred million) 3G subscribers almost seem like a single drop of water in the ocean. This created economies of scale beyond anything imagined at first.

Continuous Evolution: Back in 2002, it was assumed that most R&D would be put into the development of 3G networks. This has been true to a certain extent, but instead of lying dormant, GSM has continued to evolve. Compared to 2002, GSM hardware is much more efficient due to the technical and economic hardware refresh described above, and new features such as EDGE for higher packet switched data rates have pushed the GSM standard far beyond the circuit switched network it was once designed as.

Network Refresh: Just like the PC on a consumer’s desk, network equipment such as base stations, controllers, switches and routers has a limited lifetime and needs to be replaced. The cycle is a bit longer than the 2 or 3 years for consumer PCs, but after 10 years or so base stations have to be replaced because of aging components or their inability to support new features such as EDGE. Also, their power consumption is much higher than that of new base stations, so at some point the price of replacing a base station is quickly absorbed by reduced operational costs.

3G Network Coverage: Even in the most advanced 3G countries such as Italy, Austria, Germany and the U.K., 3G network coverage is nowhere near the almost countrywide GSM coverage. This is different from the 1990s, when GSM coverage quickly came close to the coverage levels of the analog networks.

Roaming: In analog days, there was no roaming. With GSM, international roaming is a major benefit. Even in the future the majority of roamers will still have a GSM only phone. Switching off GSM networks makes no sense as revenue from roaming customers is substantial.

So what are we going to see in Europe by 2012 then?

In five years from now I expect the majority of subscribers in Europe to have a 3G compatible phone that is backwards compatible to 2G. In urban areas, operators might decide to downscale their GSM deployment a bit as most people will by then use the 3G instead of the 2G network for voice calls. Cities will still be covered by GSM, but maybe with a smaller number of available channels and less bandwidth.

Such a scenario could come in combination with yet another equipment refresh, which some operators will require by then for both their 2G and 3G networks. At that time, base station equipment that integrates 2G, 3G and beyond-3G radios such as LTE could become very attractive. The motto of the hour could be "Replace your aging 2G and 3G equipment with a new base station that can do both plus LTE on top!"

I wonder whether it will be possible by then to use only one set of antennas for all three radio technologies!? If not, adding yet another set of antennas on top of an already crowded mast is not simple, from both a technological and a psychological point of view.

My PC Detects 13 Wifi Access Points In My Paris Apartment

[Screenshot: 13 Wifi networks visible in the 2.4 GHz band]
Back in April 2006 I reported that the number of Wifi access points I detected in my apartment in Paris had increased from two to six. Now, about a year later, the number has once more increased and is now at thirteen (see picture on the left)! And that’s only in the 2.4 GHz ISM band! It’s great to see technology prospering but it starts to worry me a bit.

If only a fraction of the access points send more than their beacon frames, the networks on the same frequencies will start to trample over each other, which impacts performance and reliability. For the moment things are still o.k., but some DSL operators such as Free now offer an HDTV set-top box as part of their standard subscription which talks to the DSL modem via Wifi. So in the evening, when lots of people watch TV, some Wifi channels will become quite busy.

Looks like I have to bring my Linksys OpenWRT 54 to Paris to trace which of these networks are really generating traffic.

ARPU Is Becoming Irrelevant

Once upon a time the wireless world was a happy and simple place for bean counters to put together their statistics. The Average Revenue Per User, or ARPU, was invented as a measure of how profitably and successfully a network was operated and marketed. Back then, things were simple: one SIM card per user and only two services, voice and SMS. In this environment, looking at the ARPU made sense. Today, however, the world looks much different and ARPU is quickly becoming an irrelevant key figure.

Use of several SIM cards

There are several reasons for this. First, people in many countries have started using several SIM cards because each SIM card offers an advantage the other doesn’t. Is the business less profitable because of this? Probably not, but the revenue of that user is now split over two SIM cards, and that looks quite bad on the ARPU scale.

Same thing for business users: many of them these days use one SIM card for their mobile phone and a second SIM card for the 3G data card that connects their notebook to the Internet. The Average Revenue Per User should contain the sum of both. In practice that’s difficult to do because there is usually no way of knowing that both SIM cards belong to the same user, especially if the SIM cards were bought by a company.

Subsidies and Prepaid:

Second, MVNOs (Mobile Virtual Network Operators) in some countries have started to offer cheap voice minutes but sell SIM cards without phones. So which ARPU is better: 30 euros a month generated with a contract that required a 300 euro subsidy for a cool phone, which, spread over 24 months, reduces the real revenue to €17.50, or 20 euros a month generated via a prepaid SIM without subsidies? Surely the €30 ARPU looks nicer on paper, but the operator probably makes more money with the prepaid customer and a €20 ARPU.
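
The comparison above boils down to spreading the handset subsidy over the contract period and subtracting it from the monthly revenue, for example:

```python
def effective_monthly_arpu(monthly_revenue, subsidy=0.0, contract_months=24):
    """Monthly revenue net of the handset subsidy spread over the contract."""
    return monthly_revenue - subsidy / contract_months

contract = effective_monthly_arpu(30.0, subsidy=300.0)  # 30 - 300/24 = 17.5
prepaid = effective_monthly_arpu(20.0)                  # no subsidy to recover
print(contract, prepaid)  # the "lower" 20 euro ARPU actually earns more
```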

Wide Range of Services

Third, mobile networks today offer a wide range of services, from voice calls to high speed Internet access. So which customer is more profitable for the operator: a customer that spends 30 euros a month on voice calls or a customer that spends 30 euros a month on Internet access? In most cases the voice ARPU is probably more profitable than the data ARPU. However, prices for voice minutes keep falling, except in countries where there is no real competition among operators (n’est-ce pas? 🙂). So in the end the data customer could eventually become more profitable.

Alternatives

In the long run I guess ARPU has to be replaced by some other, more meaningful key figure adapted to the continuing changes. Maybe it would be a good idea to have a range of key figures such as:

  • Average revenue for a voice minute, based on all voice minutes sold in the network over the period of a month.
  • Average revenue per megabyte for mobile services, i.e. web surfing and other Internet activities from mobile phones
  • Average revenue per megabyte achieved with high speed Internet access from notebooks
  • SMS and MMS should also be treated in the same manner.

I wonder if operators would be willing to go down that route!? In the end, these numbers would give a lot of insight… Also, compared to calculating the ARPU as it is done today, getting to these numbers would be a bit more difficult. However, if network operators have problems getting this information out of the call data records, they could ask Google or Yahoo to do it for them. They know how to process terabytes of information.

Alternatives, thoughts, anyone?

VoIP’s Problem With Wifi

A couple of months ago I reported about my experiences with the UTStarCom F1000G Wifi VoIP phone. It went back into the box, basically because the software was too unstable. Another reason I didn’t like the phone at the time was that its Wifi reception was not very strong and voice quality suffered when only two walls were between the access point and the phone. At the time I thought this issue might be related to this type of phone only. Now one of my friends reports that he has the same problem with a Nokia E-series phone he tried out. While stability was not an issue, voice quality degrades pretty quickly when moving away from the access point. He also came to the conclusion that the range is no match for that of DECT cordless phones. Looks like good Wifi antennas have not yet found their way into small form factor phones. However, I am afraid that’s a necessity to make VoIP over Wifi work.