The World of Mobile 5 Years Ago – October 2006

Only 5 years ago, the mobile domain was a radically different place. You don't believe it was just 5 years ago? Let me remind you: no iPhone, no Android (both were only announced in 2007). So what was going on then? Let's have a look at my blog entries from back then:

Mobile Virtual Network Operators: One can look at cheap, no-frills mobile virtual network operators (MVNOs) from many different angles and come to good and bad conclusions. What is undisputed, however, is that they had a big impact on the German market, with prices tumbling significantly in only 18 months. What we take for granted today, going into a supermarket and buying a SIM card and perhaps a mobile phone or 3G dongle with it for a couple of Euros, started back in 2005 and showed strong results in 2006. Other countries introduced MVNOs as well, but under different circumstances and with less competition. In France, for example, I am still waiting for the market to get a similar kick.

VoIP over Wi-Fi: Yes, no kidding, VoIP was already working back then, well, more or less…

EDGE: 5 years ago, I was musing about how long EDGE would remain good enough for notebook connectivity. Now, with UMTS networks offering tens of megabits per second, I can give the answer: it is not good enough anymore 🙂 In that same post I was also wondering if we would see UMTS 900 in Europe within 5 years. And indeed, we do today: UMTS 900 is deployed in Finland, in some places in the French countryside, and in big UK cities such as London. Yes, the idea of UMTS 900 was already there 5 years ago.

The Nokia N80: The predecessor to the iconic Nokia N95 and one of the very first UMTS phones with Wi-Fi inside was unveiled at Nokia World in 2006. At the time, a risky move, as network operators were probably not very happy about it. 5 years ago we were talking about 40 MB of RAM, 200 MHz ARM processors and 3 megapixel cameras as the very top level in mobile. Today we are talking about mobile phones with 1 GB of RAM, dual-core 1.2 GHz ARM processors and dedicated video hardware encoding and decoding HD video and 12 megapixel images in real time. If you extrapolate this 5 years into the future, i.e. to 2016, we are talking about the performance we have on desktops today. How that will be accomplished without draining the battery in half an hour or less remains to be seen. However, if you had told someone back in 2006 about phones with such specs, it would have been hard to imagine.

My killer app from back then is still not here now: Back in 2006 I was hoping that with the N95 I would get a feature to send my location to someone instantly instead of explaining where I am for 5 minutes in a call. From a technical point of view it has been working for many years now, and I have had plenty of opportunities to use it. But either the person on the other end uses a different maps application, has no smartphone, etc. etc. The lack of a standardized solution used by a critical mass of users is still holding things back. Perhaps in another 5 years?…

WiMAX was a hot topic: Well, pretty much forgotten by now.

Video Calling: I've been using it occasionally over the past years but it hasn't taken off for a number of reasons. Now Apple is giving it another try.

Wi-Fi 802.11n: What's currently in all high-end Wi-Fi devices was just taking shape as a draft standard back in 2006. Despite the time that has passed, I still see interoperability issues in the wild, with devices based on one manufacturer's chipset not working well with devices based on another manufacturer's chipset. Not ideal.

I guess you agree, lots has happened in the past 5 years…

Ubuntu 3G Connection Sharing – Think Reverse

Here's a thing that I stumbled over recently: I always assumed Ubuntu does not have an Internet Connection Sharing (ICS) functionality because there is no such option in the Network Manager settings in the "Mobile Broadband" section. But that was a thinking error on my part, because I assumed ICS would work the same way on Ubuntu as it does on Windows. As it turns out, however, configuring ICS works exactly the other way round.

In Windows (XP, Vista, 7), the ICS option has to be activated on the interface that offers Internet connectivity (e.g. the 3G link), and there you select the interface on which the sharing computers are located (e.g. the Ethernet port).

In Ubuntu, however, you have to set the "sharing" option on the interface where the sharing computers are located (e.g. the Ethernet port) and NOT on the interface with the Internet connectivity. This is why there is no option in the "Mobile Broadband" section. The Ubuntu way of doing things has two significant advantages:

  • There is no need to define which interface shares to which other interface.
  • Defining the Ethernet port as the port where the shared computers are connected allows Ubuntu (or rather iptables) to select any interface that has Internet connectivity to act as the Internet port. Even switching from 3G to Wi-Fi (both having Internet connectivity) is seamless to all computers connected to the Ethernet port (except for TCP and UDP connections being reset during the switchover).
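
Under the hood, Network Manager's "shared" setting essentially boils down to enabling IP forwarding and adding a masquerading rule with iptables. Purely as a hedged sketch of what an equivalent manual setup could look like (the subnet and the absence of an explicit outgoing interface are assumptions for illustration, not necessarily what your Ubuntu version configures):

    # Allow the kernel to route packets between interfaces
    sudo sysctl -w net.ipv4.ip_forward=1
    # Masquerade traffic coming from the shared LAN towards whatever
    # interface currently provides Internet connectivity (3G, Wi-Fi, ...)
    sudo iptables -t nat -A POSTROUTING -s 10.42.43.0/24 ! -d 10.42.43.0/24 -j MASQUERADE

Because the rule is not tied to a specific outgoing interface, it does not matter whether the Internet connection happens to be the 3G link or Wi-Fi at any given moment, which is exactly the behavior described above.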

Pretty neat!

Operator Patchy-ness Vs. Monoliths

I recently wondered what might be better for the mobile industry: the operator patchy-ness you find in Europe, Asia and Africa, or the network operator monoliths found, for example, in the US? From a software development point of view, things are probably easier for device manufacturers in the US. You sell your device to one network operator, you do your software once, you do your hardware once, you do everything once and you are done. Sounds good, but this also creates a great dependency. If you suddenly fall out of grace with a network operator you've been building devices for, you lose market share on a whole continent instantly. Also, despite covering a whole continent, US operators use frequency bands incompatible with most networks on the rest of the planet. That leaves them vulnerable when it comes to volume.

In Europe and the rest of the world things are very different. A manufacturer can sell a mobile device, identical from a hardware point of view, to many different network operators. If one doesn't like you, the world does not come to an end; you can still sell to the national competitor, and each country is a whole new game. Obviously, that gives more power to the manufacturer. On the other hand, it's also more work due to all the different languages and apps each country and network operator requires on the devices it sells. Many network operators now have networks in several countries, which might make things a bit simpler for both sides and gives those operators a bit more power than those that operate in only one country.

But despite there being many network operators, they all have one thing in common: they all use the same technology and the same frequency bands. From a hardware point of view, that's a huge advantage for device manufacturers, as they can concentrate on one variant of the device for all network operators. Very different in the US, with its mixture of GSM, CDMA and LTE coupled to both existing GSM and CDMA networks.

Perhaps the huge number of countries and network operators throughout the European continent has had one good effect: unlike in the US, where operators were and mostly still are of the opinion that they can go it alone, there is no such thing in Europe. Everyone knows that compromise is necessary to arrive at a common technology, and those not sticking to the consensus will have a difficult time going it alone. To me it looks like this has helped tremendously to mold the industry together. What do you think?

The Empty Phone Booth

Even out in the Californian desert, the good old telephone booth is on its way to extinction. I recently took the picture on the left at the same place from which I reported in a previous post on remote area 3G coverage. So 3G is killing the phone booth, if you will. Not that I am nostalgic about it, but this is one quite recent innovation that has come, had its prime and gone again.

Interesting side note: Have a look at the logo at the top of the booth. According to Wikipedia it is the old Pacific Telephone logo, used until 1983: "In 1969, AT&T revamped its corporate identity, resulting in a simplified Bell logo, absent of "Bell System". This logo remained with Pacific Telephone until 1983". That gives an interesting indication as to when that booth was put there.

The UK Delays LTE Auctions Again – To End of 2012

Many countries auctioned their LTE spectrum years ago, and countries such as Sweden, Norway, Finland and Germany in Europe already have LTE networks on air, the earliest of them since 2009. But for reasons that are hard to understand when looking at things from this point of view, the UK keeps delaying its spectrum auction, now to the end of 2012, according to this news piece on MocoNews and the Ofcom announcement on their page here. So even if the auction does take place at the end of 2012, it will be well into 2013 before the first networks open up, even in only a few places.

And while the UK network operators are testing LTE with a couple of sites in a few places, there are well over 2000 base stations already deployed and operating in Germany, serving private customers and, perhaps even more importantly, companies that have so far been left behind. Large cities already have live LTE coverage as well, off-loading traffic from UMTS networks.

Waiting until the end of 2012 with the auction means it will come four years after the first LTE networks launched in Europe! Back in the 1980s the UK was at the forefront of telecommunications development and had one of the most flourishing telecoms landscapes in the world. Whatever remained of that edge now seems completely lost with this move, and I am baffled. Whatever the issues are, one must wonder why they were overcome in other European countries, which have now passed the UK by almost half a decade. Is Ofcom trying to make everyone happy? I would argue that's an impossible mission. Whatever they decide, someone will be unhappy and go to court. Another year is unlikely to make a difference to that.

Yes, I know, the current spectrum allocation in the UK is different from that in other countries. But is it the same in any two countries? In the UK, only two network operators have been assigned 900 MHz spectrum, a lease that was just recently confirmed, extended and opened up for UMTS. In Germany, the setup is different: here, the 900 MHz band is assigned to all four network operators, although the shares are of different sizes, giving some more flexibility than others. But don't think this has made the auctioning process any easier, with quite a number of companies going to court before the auction to stop it for various reasons. In the end, there is and will be no single auction setup that makes everyone happy and is perceived by everyone as fair.

Providing Internet Connectivity For Meetings – Do’s and Don’ts – Part 2

In the previous post I described the problems frequently encountered with Internet access provided at international meetings with more than just a few participants. One way to ensure good connectivity as a host is obviously to talk to the ISP or conference venue manager and request feedback on questions along the lines of my previous post. The other alternative is of course to do it yourself, provided you have the necessary expertise, time for preparation, time during the meeting to keep things going, and of course the necessary backhaul capacity available at the meeting location. As I had all of these available for a recent meeting of 40 people I hosted, I decided to do it myself. Here are the details:

The Wi-Fi

To connect the participants' notebooks, smartphones, pads, etc., I used two Wi-Fi access points, one occupying a standard 20 MHz 802.11b/g channel in the 2.4 GHz band and, for additional capacity, the other a 40 MHz 802.11n channel in the 5 GHz band. Forget about using off-the-shelf Wi-Fi access points intended for home use. Many of those will only allow 20 or so devices on the network before refusing further service. Also, their user interface gives little insight into how well the network is running, whether there are performance issues and, if so, what is causing them (see previous post). With 40 people on your back you don't want to fly blind.

For ultimate configurability, manageability and reliability I decided to use a Linksys WRT-54GL 802.11b/g router. This device has been on the market for many years and has, compared to recent devices, a quite low-powered CPU and little memory. Also, its Wi-Fi chip offers only a few enhanced features. For this application, that is rather an advantage, as fewer extra features supported on the air interface means fewer interoperability issues.

Also, the WRT-54GL's default OS can be replaced with an open source project such as OpenWrt or DD-WRT, which offer the required flexibility and insight into how well the network is running. In the past, I've used OpenWrt for various experiments; this time I chose to go for DD-WRT. No particular reason, I just wanted to try something else.

In addition to the Wi-Fi interface, I used the WRT-54GL as an IP router and for network address translation (NAT). The WAN interface of the router was connected to the backhaul, as described in a minute, with a single configured IP address. On the LAN and Wi-Fi side of the network, the router acted as a DHCP server and supplied IP addresses to devices requesting them over Wi-Fi or Ethernet. That makes one somewhat independent of the DHCP capabilities provided by the backhaul part of the network.

The DHCP Server

For 5 GHz Wi-Fi connectivity I used a second Wi-Fi access point on which the DHCP server was deactivated, so it acted only as a Wi-Fi to Ethernet bridge to the WRT-54GL, which did the rest of the work. One limitation of DD-WRT is that the user interface only allows configuring a /24 subnet for the DHCP server, limiting the theoretical number of simultaneous users to around 250. Not an issue for my meeting, but it is not an order of magnitude away from my requirements and thus has to be kept in mind in case the setup is scaled up for larger meetings in the future.
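
Should a bigger address pool ever be needed, DD-WRT's DHCP functionality is provided by dnsmasq, and most builds allow extra dnsmasq options to be added beyond what the web interface exposes. Purely as a hedged sketch of what such an override could look like (the subnet, addresses and lease time below are assumptions and depend on the firmware build):

    # Hand out roughly 1000 addresses from a /22 instead of the /24 from the GUI
    dhcp-range=10.10.0.10,10.10.3.250,255.255.252.0,12h
    # Point clients to the router for the default gateway (option 3) and DNS (option 6)
    dhcp-option=3,10.10.0.1
    dhcp-option=6,10.10.0.1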

The Backhaul and Amount of Data Transferred

As mentioned in the previous post, I was a bit daring and used a cellular broadband network as backhaul. In terms of raw downlink and uplink speed, enough capacity was available, with speeds beyond double-digit MBit/s reached at any time. Your mileage may vary, however, depending on what kind of network you use, how fast the backhaul to the base station is, how good reception conditions are at your meeting, etc. Meetings often take place deep indoors or even underground. Under such circumstances, forget about using cellular. In some cases you are lucky and have dedicated indoor coverage with repeaters, something not uncommon in hotels or meeting venues in Europe. There is no way around a site survey before the meeting to evaluate signal conditions and the data transfer rates available at that location. If you don't do it, you are likely in for a very bad surprise. Another thing that can be a bit tricky to find out is whether the network operator can handle the number of simultaneous VPN connections required by the meeting participants over a single cellular connection. In my case it worked, and I had about 40 VPNs running in parallel from my network into the Internet over a single cellular connection. There is no guarantee, however, that this will work with every network operator (fixed or mobile).

The next thing that needs to be considered is the amount of data that is transferred. The 40 participants in my meeting transferred around 4 GByte of data per day, which is about 100 MB per day per participant. Considering that most VPNs already generate several megabytes worth of data per hour just for keep-alive signaling, that value is not very high. In other meetings, depending on the kind of applications used, much more or much less data might be transferred; it's difficult to predict. In other words, don't plan to run the meeting on a subscription that starts to throttle the connection after a gigabyte or two.
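
To get a feeling for how much data a meeting actually consumes, the byte counters of the backhaul interface can be checked during or after the day. A hedged example for a Linux-based router or notebook (the interface names are assumptions; a 3G dongle may show up as ppp0, wwan0 or usb0 depending on the driver):

    # Raw receive/transmit byte counters since boot
    grep -E 'ppp0|wwan0|usb0' /proc/net/dev
    # With vnstat installed, per-day totals for a given interface
    vnstat -d -i ppp0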

Windows Internet Connection Sharing

As my Linksys router is quite old, it doesn't have a USB port for the Internet dongle. My second choice for cellular connectivity was therefore a notebook running Windows Vista and Internet Connection Sharing (ICS) between the dongle and the Ethernet port. (Windows ICS has come in handy many times since I first wrote about it back in 2006.)

The WRT-54 was then connected via an Ethernet cable to the Windows notebook, which routed the packets, via another network address translation, over the USB dongle to the Internet. Yes, Windows and double NATing, far from the ideal solution. The reason for the double NATing was that I had some issues with the Windows ICS DHCP implementation assigning IP addresses reliably to client devices. Therefore I decided to use NATing on the WRT-54 as well, to shield client devices from Microsoft's strange implementation.
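
To make the resulting chain of address translations easier to picture, here is the topology sketched out; the IP addresses are purely illustrative assumptions:

    # clients (Wi-Fi / Ethernet, e.g. 192.168.1.0/24)
    #     |
    # WRT-54GL: DHCP for clients, NAT #1 (LAN e.g. 192.168.1.1)
    #     | Ethernet
    # Windows Vista notebook with ICS: NAT #2 (e.g. 192.168.0.1)
    #     | USB
    # 3G dongle -> cellular network -> Internet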

The Windows NAT was also my single blind spot, as I am not aware of the maximum number of simultaneous NAT translations the OS will perform. Having the NAT with statistics on the WRT-54 would at least have given me numbers on how far things went in case the Windows side failed. Fortunately, it didn't.

Maximum number of NAT translations and Processor Power

On the Linksys router the number of simultaneous NAT translations can be configured, and the highest value, due to the small amount of RAM, is 4096. In practice, around 400 to 1200 translations were in use simultaneously at any one time, well below the theoretical limit. However, again, this is not an order of magnitude away from real life use, so something to keep in mind. Performing network address translation requires the router to look into every IP packet and change the header. In other words, the processor and RAM get quite a workout. The Linksys WRT-54GL with its 200 MHz processor did the job perfectly, with packet round trip times never going up no matter what the current data rate and number of translations in use were. A good indication that things were going well. I could see, however, that the processor was pretty much running at full capacity, so I am not sure how far away I was from processing power becoming a bottleneck. Perhaps not much. So for meetings with more participants I would go for a more powerful device in the future. More about that below.
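
For reference, both DD-WRT and OpenWrt also let you inspect and resize the connection tracking table from a shell on the router. A hedged sketch, assuming the older kernel paths these WRT-54GL builds typically use (newer kernels expose nf_conntrack instead of ip_conntrack):

    # Number of currently tracked connections, i.e. active NAT translations
    wc -l /proc/net/ip_conntrack
    # Configured maximum, and how to raise it if enough RAM is available
    cat /proc/sys/net/ipv4/netfilter/ip_conntrack_max
    echo 4096 > /proc/sys/net/ipv4/netfilter/ip_conntrack_max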

Bottlenecks Elsewhere

The picture on the left (click on the picture to enlarge) shows around 30 minutes of network traffic during the meeting. As can be seen, the maximum throughput peak was at around 13 MBit/s, far below what the backhaul link would have been able to provide. The bottleneck was elsewhere, however: my private VPN tunnel is limited to around 8 MBit/s and that of my employer to about 2 MBit/s. From the bandwidth usage graph I take it that other companies have similar restrictions. Also, most web servers and file servers won't send files and web pages down the pipe at more than a few MBit/s per user. As the graph covers a duration of around 30 minutes, individual web page downloads only register as narrow peaks. The big red areas are large file downloads, several tens of megabytes at a time. All of the blue area in the graph is unused backhaul capacity.

Summary

Altogether, the setup described above provided ample capacity and very high instantaneous data rates to every user in the network throughout the meeting days. From an application usage point of view, the ultimate stress test was a WebEx screen sharing session of around 20 computers in the room via a WebEx server on the Internet. The backhaul capacity required for this wasn't very high compared to what was available, but the sheer number of packets flowing when the screen owner modified text and pictures was probably at its peak during that time.

And, most importantly, connectivity to the cellular network didn't drop a single time during the three meeting days. Quite an achievement for the cellular network and the devices used locally. Even with the best preparations, there are some unknowns that could make things pretty rough during the meeting. Therefore, my last piece of advice in this post is to always have a backup plan available. Fortunately, I didn't need mine.

Providing Internet Connectivity For Meetings – Do’s and Don’ts

One of the major pain points when attending international meetings with more than just a few attendees these days is more often than not the lack of proper Internet access at the meeting venue. When those meetings take place in hotels, some organizers just rely on the hotel's Internet connectivity, which is often unable to provide proper service to more than a few people simultaneously. Connections are often slow, and sometimes the whole network becomes completely unusable, breaking down once a critical number of simultaneous connections is reached.

Why Connectivity?

Some people might think connectivity during meetings and conferences is a luxury. That's not the case, however. At least during the meetings I attend, participants require access to information in their home network, they need to communicate with other people during the meeting via email, instant messaging, etc. to get advice, they use cloud infrastructure to exchange documents in real time with people on-site and off-site, they use connectivity for multi-user screen sharing, etc. etc.

Doing it On My Own With Cellular Backhaul

When recently hosting a meeting with 40 participants myself, I decided not to rely only on an external party to provide adequate Internet connectivity but to gain some experience myself. I therefore set up my own local Wi-Fi network and provided backhaul capacity over a high speed broadband cellular network in Germany. Perhaps a somewhat risky plan, but with achievable cellular speeds in the double-digit MBit/s range I thought it was worth a try. I spent many evenings putting the required kit together and trying things out beforehand, as during the meeting there is hardly any time for experiments. It either works out of the box or you have a problem. To my delight, the network I provided worked exceptionally well and participants were very happy with the overall experience. Understandably, I was very pleased not only about this but also about the experience and insight I gained. The following is an account of the thoughts that went into this, the equipment used and the lessons learnt, in case you might plan a similar thing in the future.

Getting the Wi-Fi right

One of the first bottlenecks frequently encountered is that there is often only a single Wi-Fi channel available at a meeting site. While in theory even an 802.11g channel has a capacity of 54 MBit/s, in practice this is reduced to just about 22 MBit/s in the best case, which is a single device communicating with an access point only a few meters away. With several dozen devices not in close proximity communicating with the access point simultaneously, this value drops much further.

The solution to this is to use several Wi-Fi access points even in a single room and assign them different frequency channels and SSIDs. It's also a good idea to use the 5 GHz band in addition, as there is little interference on that band today. Not all devices support this band, but if that access point uses a separate and "interesting" SSID such as "fastest channel", those participants who can see it will likely use it, thus enjoying superior speeds and at the same time taking their load off the 2.4 GHz channels.

Another important thing is to make sure that, in case the meeting has dedicated Wi-Fi resources, the channels used do not partly or fully overlap with other Wi-Fi access points in the area, if at all possible. On the 2.4 GHz band, Wi-Fi channels need to be at least 4 channels apart from each other. There are tools to check this, such as "Wlaninfo" on Windows, or on Linux a command such as "sudo iw dev wlan0 scan | grep -E 'channel|SSID|signal'". Things still work if channels overlap or are stacked on top of each other, but throughput is reduced to some degree, depending on how much load there is on the other access point.
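
To see at a glance how crowded each 2.4 GHz channel is, the same scan output can also be aggregated directly on the command line. A hedged example for Linux, assuming the Wi-Fi interface is called wlan0:

    # SSID, signal strength and channel of all networks in range
    sudo iw dev wlan0 scan | grep -E 'SSID|signal|DS Parameter set'
    # Count how many access points sit on each channel
    sudo iw dev wlan0 scan | grep 'DS Parameter set' | sort | uniq -c | sort -rn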

The first devices that usually go onto the local Wi-Fi are often not the notebooks but the participants' smartphones. Roaming charges remain expensive and the local Wi-Fi is a welcome relief. But despite the devices being small and the amount of data transmitted being perhaps low, the number of UDP/TCP connections that need to be translated by the NAT is by no means lower than that of a notebook. So beware: a smartphone counts as a full device, and on average 1.5 devices per participant should go into the calculation.

Which brings me to Wi-Fi compatibility. Most devices today are tested for Wi-Fi compatibility. In practice, however, I've seen it more than once that an 802.11n capable smartphone has brought an 802.11n access point to its knees due to some sort of incompatibility. In other words, all other users of that channel suddenly can't communicate anymore either. For the moment, the best solution is to switch off 802.11n support in the access point or to only use 11n in the 5 GHz band. That way, only one of the channels is impacted, and most people will not notice or will switch to another channel once things start to go wrong.

Local Networks and Hordes of Devices

Another issue I have encountered is that some local networks run out of IP addresses, either because their DHCP server is limited to just a few dozen IP addresses in the first place or because they only use a single /24 subnet with 256 addresses for the whole hotel or meeting complex. Even if there are fewer people there at any one time, the DHCP server might run out of IP addresses, as these are usually assigned for a full day, and even when devices sign off, the IP addresses remain assigned until the lease runs out.

And yet another issue that seems to make life difficult at meetings with many participants is that the local infrastructure is unable to cope with the massive number of simultaneous TCP and UDP connections the local backhaul router has to perform Network Address Translation (NAT) on. This is necessary as local networks usually assign local IP addresses and only use a single global IP address for all participants. That can work quite well, but the local equipment needs to be able to handle the massive number of simultaneous connections. In business meetings, one thing helps: most participants use VPN tunnels, which hide the TCP and UDP connections to the company network, or even all connections of the device, from the local network. Thus, instead of dozens or hundreds of translations, the local NAT only has to handle a few for such a device.

How Much Backhaul Bandwidth

And finally, one of the crucial things is how much backhaul bandwidth to the Internet is available. For a meeting of 50 to 100 participants, 10 MBit/s in the downlink direction and at least 3-5 MBit/s in the uplink direction is a good ballpark number. Any less and congestion is likely to occur. More is better, of course…

What To Ask the Hotel or Internet Provider

In case you plan to use the Internet connectivity provided locally, ask the hotel or Internet provider about the things mentioned above. If they can't come up with good answers or just make general promises, chances are high it won't work well in the end. And one more important thing: on-site support is vital, as meetings can go sour instantly when connectivity breaks down.

Part 2 – How To Do It Yourself

The above should already be pretty good advice in case you decide to provide your own network infrastructure and backhaul. In the second part, I'll go into the details of the equipment and software I used and the experience gained with it.

Text For Your Car

Here's a cool yet very simple SMS based service I recently tried, "Text For Your Car": At a hotel I recently stayed at, I used valet parking after searching for a parking space in the neighborhood myself without success for a while. While handing over the car, I got the card shown on the left with a mobile number on it, to which I could send my parking ID when I wanted my car to be retrieved and waiting for me upon leaving the hotel. I gave it a try, and within seconds I got a confirmation back that the car would be retrieved. Sure, one could also have called ahead, but sometimes I just prefer text based communication.

AT&T In the Wilderness

While I admittedly don't usually have many good things to report when it comes to cellular network coverage, capacity and availability in the U.S., I have real praise to offer for a change this time. With some time to spare after a week of meetings, I drove out from San Diego in the direction of Palm Springs, preferring the small roads and enjoying the countryside. When stopping for lunch at one of those traditional family-run highway restaurants a dozen miles away from the next town, I was surprised to find good 3G coverage from AT&T inside.

Even more surprising was that I could get connected with both my penta-band UMTS Nokia N8 and my tri-band 3G USB stick. In other words, AT&T has deployed UMTS 1900 there, despite the very low population density. Whether using the 1900 MHz band for 3G was done on purpose or not is another question, but it served me well while I was there. Here's a link if you want to see the location for yourself. The pictures on the left show a typical cellular installation along the highway, of which I've seen quite a few. It's an Ericsson mini-RBS, probably 2G only. And it must have been there for a while already, as there was still a Cingular logo on it next to the current AT&T globe.

Verizon Open Access vs. Real Openness

Whenever I read about Verizon and their open access program, I have to wonder how many more interpretations or uses of the term "openness" there could possibly be!?

Let's look at Europe for a minute. Ever since the type approval scheme for mobile devices came to an end in the later part of the 1990s, anyone has been able to bring GSM and later UMTS mobile devices to market, and they can be used in all networks independently of the blessing of any network operator. SIM cards and devices have been separate since back in the 1980s, and I am only a SIM card away from using any device in any network. I would call this "openness".

Has it harmed the industry and the networks in the past decade? Quite the contrary, I would argue, and real life shows the result of this policy. The ecosystem is flourishing, networks are (mostly) properly built and maintained, and healthy competition keeps prices (mostly) at an affordable level.

Perhaps I am painting the picture a bit too rosy, and doubtlessly things could still be improved. But compare that to the so-called "open access" or "open development" program of Verizon, where you still have to go through their lab and get their blessing before you can sell a device. On top of that, once you have an EVDO or LTE device that works on Verizon's particular (LTE) band, you have to go back to them again if you want to sell it to customers. I'd call this a "pretty firm grip". In other words, devices have to work in a monopoly situation, and the monopolist still has to give its blessing before you can use their "open" network. Is this openness?

But perhaps I am missing something here? If so, please enlighten me with a comment.