Providing Internet Connectivity For Meetings – Do’s and Don’ts – Part 2

In the previous post I described the problems frequently encountered with Internet access provided at international meetings with more than just a few participants. One way to ensure good connectivity as a host is obviously to talk to the ISP or conference venue manager and ask questions along the lines of my previous post. The other alternative is of course to do it yourself, provided you have the necessary expertise, time for preparation, time during the meeting to keep things going and, of course, the necessary backhaul capacity available at the meeting location. As I had all these things available for a recent meeting of 40 people I hosted, I decided to do it myself. Here are the details:

The Wi-Fi

To connect the participants' notebooks, smartphones, pads, etc. I used two Wi-Fi access points, one occupying a standard 20 MHz 802.11b/g channel in the 2.4 GHz band and, for additional capacity, a 40 MHz 802.11n channel in the 5 GHz band. Forget about using off-the-shelf Wi-Fi access points meant for home use. Many of those will only allow 20 or so devices on the network before refusing further service. Also, their user interface gives little insight into how well the network is running, whether there are performance issues and, if so, what causes them (see previous post). With 40 people on your back you don't want to fly blind.

For ultimate configurability, manageability and reliability I decided to use a Linksys WRT-54GL 802.11b/g router. This device has been on the market for many years and has, compared to recent devices, a quite low powered CPU and little memory. Also, the Wi-Fi chip offers only a few enhanced features. For this application this is rather an advantage, as fewer extra features on the air interface mean fewer interoperability issues.

Also, the WRT-54GL's default OS can be replaced with an open source firmware such as OpenWrt or DD-WRT, which offer the required flexibility and insight into how well the network is running. In the past I've used OpenWrt for various experiments and this time chose to go for DD-WRT. No particular reason, I just wanted to try something else.

In addition to the Wi-Fi interface, I used the WRT-54GL as an IP router and for network address translation (NAT). The WAN interface of the router was connected to the backhaul, described in a minute, with a single configured IP address. On the LAN and Wi-Fi side of the network, the router acted as a DHCP server and supplied IP addresses to devices requesting them over Wi-Fi or Ethernet. That makes one somewhat independent of the DHCP capabilities provided by the backhaul part of the network.

The DHCP Server

For 5 GHz Wi-Fi connectivity I used a second Wi-Fi access point on which the DHCP server was deactivated, so it acted only as a Wi-Fi to Ethernet bridge to the WRT-54GL, which did the rest of the work. One limit of DD-WRT is that the user interface only allows configuring a /24 subnet for the DHCP server, limiting the theoretical number of simultaneous users to around 250. Not an issue for my meeting, but it is not an order of magnitude away from my requirements and thus has to be kept in mind in case the setup is to be scaled up for larger meetings in the future.
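The /24 limit above can be sketched with Python's ipaddress module; the subnet shown is a hypothetical example, and the assumption is that the router itself takes one address out of the pool:

```python
import ipaddress

# A /24 holds 256 addresses; network and broadcast addresses are not usable.
subnet = ipaddress.ip_network("192.168.1.0/24")  # hypothetical LAN subnet
usable = sum(1 for _ in subnet.hosts())

# The router/gateway occupies one address, leaving the rest for DHCP clients.
dhcp_pool = usable - 1
print(usable, dhcp_pool)  # 254 usable hosts, 253 assignable to clients
```

So "around 250" simultaneous users is indeed the hard ceiling of a /24 pool, before even accounting for participants with more than one device.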

The Backhaul and Amount of Data Transferred

As said in the previous post I was a bit daring and used a cellular broadband network as backhaul. In terms of raw downlink and uplink speed, enough capacity was available, with double-digit MBit/s speeds reached at any time. Your mileage may vary, however, depending on what kind of network you use, how fast the backhaul to the base station is, how good reception conditions are at your meeting location, etc. Meetings are often deep indoors or even underground. Under such circumstances, forget about using cellular. In some cases you are lucky and have dedicated indoor coverage with repeaters, something not uncommon in hotels or meeting venues in Europe. There is no way around a site survey before the meeting to evaluate signal conditions and data transfer rates available at that location. If you don't do it, you are likely in for a very bad surprise. Another thing that can be a bit tricky to find out is whether the network operator can handle the number of simultaneous VPN connections required by the meeting participants over a single cellular connection. For me it was the case and I had about 40 VPNs running in parallel from my network into the Internet over a single cellular connection. There is no guarantee, however, that this will work with all network operators (fixed or mobile).

The next thing that needs to be considered is the amount of data that is transferred. The 40 participants in my meeting transferred around 4 GByte of data per day. That's about 100 MB per day per participant. Considering that most VPNs already generate several megabytes worth of data per hour just for keep-alive signaling, that value is not very high. In other meetings, depending on the kind of applications used, much more or much less data might be transferred; it's difficult to predict. In other words, don't plan to run the meeting on a subscription that starts throttling the connection after a gigabyte or two.
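The data budget above is easy to rough out in advance. A minimal sketch, using the numbers from my meeting (the three-day duration is from this post; everything else would need to be estimated for your own event):

```python
# Back-of-the-envelope data budget for a meeting.
participants = 40
mb_per_participant_per_day = 100   # observed average, incl. VPN keep-alives
meeting_days = 3

daily_gb = participants * mb_per_participant_per_day / 1000
total_gb = daily_gb * meeting_days
print(daily_gb, total_gb)  # 4.0 GB per day, 12.0 GB for the whole meeting
```

A subscription throttled after one or two gigabytes would thus have been exhausted before the first day was over.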

Windows Internet Connection Sharing

As my Linksys router is quite old it doesn't have a USB port for the Internet dongle. My second choice for cellular connectivity was therefore a notebook running Windows Vista and Internet Connection Sharing (ICS) between the dongle and the Ethernet port. (Windows ICS has come in handy many times since I first wrote about it back in 2006.)

The WRT-54 was then connected via an Ethernet cable to the Windows notebook, which routed the packets, via another network address translation, over the USB dongle to the Internet. Yes, Windows and double NATing, far from the ideal solution. The reason for the double NATing was that I had some issues with the Windows ICS DHCP implementation assigning IP addresses reliably to client devices. I therefore decided to use NATing on the WRT-54 as well to shield client devices from Microsoft's strange implementation.
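One detail worth checking in such a double-NAT setup is that the inner LAN subnet does not overlap with the subnet ICS hands to the router's WAN port, since an overlap would make return routing ambiguous. A small sketch; both subnets here are assumptions for illustration (the ICS default on that Windows generation was, to my knowledge, in the 192.168.0.x range, so I picked a different subnet for the Wi-Fi side):

```python
import ipaddress

# Hypothetical addressing plan for a double-NAT setup.
ics_subnet = ipaddress.ip_network("192.168.0.0/24")  # assumed ICS-side subnet
lan_subnet = ipaddress.ip_network("192.168.1.0/24")  # chosen for the Wi-Fi clients

# With double NAT, overlapping inner and outer subnets would break routing.
assert not ics_subnet.overlaps(lan_subnet), "pick a different LAN subnet"
print("no overlap:", ics_subnet, "vs", lan_subnet)
```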

The Windows NAT was also my single blind spot, as I don't know the maximum number of simultaneous NAT translations the OS will perform. Having a NAT with statistics on the WRT-54 would at least have given me numbers on how far it got in case it failed. Fortunately, it didn't.

Maximum Number of NAT Translations and Processor Power

On the Linksys router the number of simultaneous NAT translations can be configured; the highest value, due to the low amount of RAM, is 4096. In practice around 400 – 1200 translations were required at any one time, well below the theoretical limit. However, again not an order of magnitude away from real life use, so something to be kept in mind. Performing network address translation requires the router to look into every IP packet and change the header. In other words, the processor and RAM are heavily utilized. The Linksys WRT-54GL with its 200 MHz processor did the job perfectly, with packet round trip times never going up no matter what the current data rate and number of translations in use were. A good indication that things were going well. I could see, however, that the processor was pretty much running at full capacity, so I am not sure how far away I was from processing power becoming a bottleneck. Perhaps not much. For meetings with more participants I would therefore go for a more powerful device in the future. More about that below.
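Whether 4096 translations is enough can be estimated beforehand. A sketch under stated assumptions: the 1.5-devices-per-participant rule of thumb is from part 1 of this series, while the 20 connections per device is my own illustrative guess (VPN users need far fewer, as the tunnel hides their individual connections):

```python
import math

# Rough NAT sizing check against the WRT-54GL's configured maximum.
participants = 40
devices = math.ceil(participants * 1.5)  # smartphones count as full devices
conns_per_device = 20                    # assumed average; VPN users need far fewer

expected = devices * conns_per_device
limit = 4096
headroom = limit / expected
print(expected, round(headroom, 1))  # 1200 translations, ~3.4x headroom
```

The estimate of 1200 lands right at the upper end of the 400 – 1200 translations observed in practice, so the roughly threefold headroom looks plausible but is not an order of magnitude.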

Bottlenecks Elsewhere

The picture shows around 30 minutes of network traffic during the meeting. As can be seen, the maximum throughput peak was at around 13 MBit/s, far below what the backhaul link would have been able to provide. The bottleneck was elsewhere, however. My private VPN tunnel is limited to around 8 MBit/s and that of my employer to about 2 MBit/s. From the bandwidth usage graph I take it that other companies have similar restrictions. Also, most web servers and file servers won't send files and web pages down the pipe at more than a few MBit/s per user. As the graph covers a duration of around 30 minutes, individual web page downloads only register as narrow peaks. The big red areas are large file downloads, several tens of megabytes at a time. All of the blue area in the graph is unused backhaul capacity.

Summary

Altogether, the setup described above provided ample capacity and very high instantaneous data rates to every user in the network throughout the meeting days. From an application usage point of view, the ultimate stress test was a WebEx screen sharing session of around 20 computers in the room via a WebEx server on the Internet. The backhaul capacity required for this wasn't very high compared to what was available, but the sheer number of packets flowing when the screen owner modified text and pictures was probably at its peak during that time.

And, most importantly, connectivity to the cellular network didn't drop a single time during the three meeting days. Quite an achievement of the cellular network and the devices used locally. Even with the best preparations, there are some unknowns that could make things pretty rough during the meeting. Therefore my last advice in this post is to always have a backup plan available. Fortunately, I didn't need mine.

Providing Internet Connectivity For Meetings – Do’s and Don’ts

One of the major pain points when attending international meetings with more than just a few attendees these days is more often than not the lack of proper Internet access at the meeting venue. When those meetings take place in hotels, some organizers just rely on the hotel's Internet connectivity, which is often unable to provide proper service to more than a few people simultaneously. Often, connections are slow, and sometimes the whole network becomes completely unusable as it breaks down once a critical number of simultaneous connections is reached.

Why Connectivity?

Some people might think connectivity during meetings and conferences is a luxury. That's not the case, however. At least during the meetings I attend, participants require access to information in their home network, they need to communicate with other people via email, instant messaging, etc. to get advice, they use cloud infrastructure to exchange documents in real time with people on-site and off-site, they use connectivity for multi-user screen sharing, and so on.

Doing it On My Own With Cellular Backhaul

When recently hosting a meeting with 40 participants myself, I decided not to rely only on an external party to provide adequate Internet connectivity but to gain some experience myself. I therefore set up my own local Wi-Fi network and provided backhaul capacity over a high speed broadband cellular network in Germany. Perhaps a somewhat risky plan, but with achievable cellular speeds in the double digit MBit/s range I thought it was worth a try. I spent many evenings putting the required kit together and trying things out beforehand, as during the meeting there is hardly any time for experiments. It either works out of the box or you have a problem. To my delight, the network I provided worked exceptionally well, with participants being very happy about the overall experience. Understandably, I was very pleased not only about this but also about the experience and insight I gained. The following is an account of the thoughts that went into this, the equipment used and the lessons learnt, in case you might plan a similar thing in the future.

Getting the Wi-Fi right

One of the first bottlenecks frequently encountered is that there is often only a single Wi-Fi channel available at a meeting site. While in theory an 802.11g channel has a capacity of 54 MBit/s, in practice this is reduced to just about 22 MBit/s in the best case, which is a single device communicating with an access point only a few meters away. With several dozen devices not in close proximity communicating with the access point simultaneously, this value drops much further.

The solution to this is to use several Wi-Fi access points even in a single room and assign them different frequency channels and SSIDs. It's also a good idea to use the 5 GHz band in addition, as there is little interference in that band today. Not all devices support this band, but if that access point uses a separate and "interesting" SSID such as "fastest channel", those participants who can see it will likely use it, thus enjoying superior speeds and at the same time removing their load from the 2.4 GHz channels.

Another important thing is to make sure that, in case the meeting has dedicated Wi-Fi resources, the channels used do not partly or fully overlap with other Wi-Fi access points in the area if at all possible. On the 2.4 GHz band, Wi-Fi channels need to be at least 4 channel numbers apart from each other. There are tools to check this, such as "Wlaninfo" on Windows, or a nice command on Linux such as "sudo iw dev wlan0 scan | grep -Ew 'channel|SSID|signal'" (note the -E, as the alternation requires extended regular expressions). Things still work if the channels overlap or are stacked on top of each other, but throughput is reduced to some degree depending on how much load there is on the other access point.
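The 4-channels-apart rule of thumb can be written down as a tiny helper, for example to sanity-check a channel plan against the channels a scan reveals in the area (the function simply encodes the rule stated above):

```python
# Rule of thumb from the text: on the 2.4 GHz band, two channels need to be
# at least 4 channel numbers apart to avoid overlapping with each other.
def channels_overlap(a: int, b: int, min_separation: int = 4) -> bool:
    """Return True if the two 2.4 GHz channel numbers are too close."""
    return abs(a - b) < min_separation

print(channels_overlap(1, 6))   # False: the classic 1/6/11 plan is safe
print(channels_overlap(3, 5))   # True: only two channels apart, they interfere
```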

The first devices on the local Wi-Fi are usually not the notebooks but the smartphones of the participants. Roaming charges remain expensive and the local Wi-Fi is a welcome relief. But despite the devices being small and the amount of data transmitted being perhaps low, the number of UDP / TCP connections that need to be translated by the NAT is by no means lower than that of a notebook. So beware: a smartphone counts as a full device, and on average 1.5 devices per participant should be put into the calculation.

Which brings me to Wi-Fi compatibility. Most devices today are tested for Wi-Fi compatibility. In practice, however, I've seen it more than once that an 802.11n capable smartphone has brought an 802.11n access point to its knees due to some sort of incompatibility. In other words, all other users of that channel suddenly can't communicate anymore either. For the moment, the best solution to this is to switch off 802.11n support in the access point or only use 11n in the 5 GHz band. This way, only one of the channels is impacted and most people will not notice, or will switch to another channel once things start to go wrong.

Local Networks and Hordes of Devices

Another issue I have encountered is that some local networks run out of IP addresses because their DHCP server is limited to just a few dozen IP addresses in the first place, or because they only use a single /24 subnet with 256 addresses for the whole hotel or meeting complex. Even if there are fewer people there at any one time, the DHCP server might run out of IP addresses, as these are usually assigned for a full day, and even when devices sign off the IP addresses remain assigned until the lease runs out.
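The effect of day-long leases is easy to miss: what matters is not how many devices are present at once, but how many distinct devices are seen within one lease period. A sketch with hypothetical numbers (both device counts below are made up for illustration):

```python
# Why a /24 pool can run dry despite modest concurrent usage: with day-long
# leases, every device seen during the day holds an address until it expires.
pool_size = 253              # /24 minus network, broadcast and gateway addresses
simultaneous_devices = 120   # hypothetical peak present at any one time
devices_seen_per_day = 300   # hypothetical churn: guests come and go all day

# Addresses held = everyone seen within one lease period, not just those present.
exhausted = devices_seen_per_day > pool_size
print(exhausted)  # True: the pool runs out even with only 120 concurrent users
```

Shorter lease times are the usual countermeasure, at the cost of a little more DHCP traffic.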

And yet another issue that seems to make life difficult at meetings with many participants is that the local infrastructure is unable to cope with the massive number of simultaneous TCP and UDP connections the local backhaul router has to perform Network Address Translation (NAT) on. This is necessary as local networks usually assign local IP addresses and only use a single global IP address for all participants. That can work quite o.k., but the local equipment needs to be able to handle the massive number of simultaneous connections. In business meetings one thing helps: most participants use VPN tunnels, which hide the TCP and UDP connections to the company network, or even all connections of the device, from the local network. Thus, instead of dozens or hundreds of translations, the local NAT only has to handle a few for such a device.

How Much Backhaul Bandwidth

And finally, one of the crucial things is how much backhaul bandwidth to the Internet is available. For a meeting of 50 to 100 participants, 10 MBit/s in the downlink direction and at least 3-5 MBit/s in the uplink direction is a good ballpark number. Any less and congestion is likely to occur. More is better, of course…
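The ballpark works because users are rarely all transferring data at the same instant. A quick sketch of the per-user share under that assumption (the 10% active fraction is my own illustrative guess, not a measured value):

```python
# Sanity check of the ballpark: 10 MBit/s downlink for 50-100 participants.
participants = 75            # mid-range meeting size
downlink_mbit = 10
active_fraction = 0.1        # assumption: ~10% transferring at any given moment

share_per_active_user = downlink_mbit / (participants * active_fraction)
print(round(share_per_active_user, 2))  # ~1.33 MBit/s per active user
```

Around 1.3 MBit/s per active user is enough for email, web browsing and VPN traffic, but the moment several large downloads coincide, congestion sets in, which is why more is always better.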

What To Ask the Hotel or Internet Provider

In case you plan to use the Internet connectivity provided locally, ask the hotel or Internet provider about the things mentioned above. If they can't come up with good answers or just make general promises, chances are high it won't work well in the end. And one more important thing: on-site support is vital in case things go wrong, as meetings can go sour instantly when connectivity breaks down.

Part 2 – How To Do It Yourself

The above should already be pretty good advice in case you decide to provide your own network infrastructure and backhaul. In the second part, I'll go into the details of the equipment and software I used and the experiences gained with it.

Text For Your Car

Here's a cool yet very simple SMS based service I recently tried, "Text For Your Car": In a hotel I recently stayed at, I used valet parking after searching for a parking space myself in the neighborhood without success for a while. While handing over the car, I got a card with a mobile number on it to which I could send my parking-id when I wanted my car to be retrieved and waiting for me when leaving the hotel. I gave it a try and within seconds I got a confirmation back that the car would be retrieved. Sure, one could also have called ahead, but sometimes I just prefer text based communication.

AT&T In the Wilderness

While I admittedly don't usually have many good things to report when it comes to cellular network coverage, capacity and availability in the U.S., I have real praise for a change this time. With some time to spare after a week of meetings I drove out from San Diego in the direction of Palm Springs, preferring the small roads and enjoying the countryside. When stopping for lunch at one of those traditional family run highway restaurants a dozen miles away from the next town, I was surprised to find good 3G coverage from AT&T inside.

Even more surprising was that I could get connected with both my penta-band UMTS Nokia N8 as well as with my tri-band 3G USB stick. In other words, AT&T has deployed UMTS 1900 here, despite the very low population density. Whether using the 1900 MHz band for 3G was done on purpose or not is another question, but it served me well while I was there. Here's a link if you want to see the location for yourself. The pictures show a typical cellular installation along the highway, of which I've seen quite a few. It's an Ericsson mini-RBS, probably 2G only. And it must have been there for a while already, as there was still a Cingular logo on it next to the current AT&T globe.

Verizon Open Access vs. Real Openness

Whenever I read about Verizon and their open access program I have to wonder how many more interpretations or uses of the term "openness" there could possibly be!?

Let's look at Europe for a minute. Ever since the type approval scheme for mobile devices came to an end in the later part of the 1990's, anyone could bring GSM and later UMTS mobile devices to market, and they could be used in all of the networks independently of the blessing of any network operator. SIM cards and devices have been separate since back in the 1980's, and I am only a SIM card away from using any device in any network. I would call this "openness".

Has it harmed the industry and the networks in the past decade? Quite the contrary, I would argue, and real life shows the result of this policy. The ecosystem is flourishing, networks are (mostly) properly built and maintained, and healthy competition keeps prices (mostly) at an affordable level.

Perhaps the picture is painted a bit too rosy and doubtlessly things could still be improved. But compare that to the so called "open access" or "open development" program of Verizon, where you still have to go through their lab and get their blessing before you can sell a device. On top of that, once you have an EVDO or LTE device that works on Verizon's particular (LTE) band, you have to go back to them again if you want to sell your device to customers. I'd call this a "pretty firm grip". In other words, the devices have to work in a monopoly situation and the monopolist still has to give its blessing before you can use their "open" network. Is this openness?

But perhaps I am missing something here? If so, please enlighten me with a comment.

Australia, Telstra and Double Layer UMTS

In quite a number of countries, UMTS is used on more than one frequency band. In the US, AT&T, for example, has UMTS deployed in both the 850 and 1900 MHz bands. And when I am in the US I experience the difference quite often. My Nokia N8 is a penta-band UMTS device while my 3G dongle is tri-band and only supports the 1900 MHz band in the US. So quite often, when I am indoors, I can still get reasonable 3G coverage with my N8 over the 850 MHz band while my 3G dongle finds nothing anymore.

In Europe, O2 in the UK has deployed UMTS in the standard 2100 MHz band and in addition, in London for example, also in the 900 MHz band (for details see here). Network operators in France, Finland and perhaps in a few other countries also use UMTS 900, but for the moment only for rural coverage outside the bigger cities.

And now I've come across another example, this time from Australia. Telegeography reports that Telstra and H3G have run a 2100 MHz UMTS network together for the past couple of years while Telstra has run its own UMTS 850 MHz network in addition as its workhorse. With the common 2100 MHz network now being shut down due to H3G having been acquired by Vodafone Australia, the report says that Telstra will continue using UMTS 2100 in some places. Also, if I am not wrong, UMTS 900 is used in Australia as well by one of the other network operators. So it's similar to the US, where UMTS runs on 3 different frequency bands (850, 1900 and 1700/2100 MHz).

I'm dwelling on this a little because of the LTE frequency challenge arising these days, with LTE being used in a myriad of different frequency bands, which makes it hard to build devices that will work across the world. But as the examples above show, we have already arrived there, with penta-band UMTS now required for truly global access. So countries like Germany, where LTE is already deployed in three frequency bands (800 MHz, 1800 MHz and 2600 MHz), are not all that different from other countries using UMTS in three frequency bands. Not that this makes the issue any easier, but it is an interesting way to look at it.

Android and Africa from Someone Who Knows

With Android firmly established in the smartphone domain in developed countries, Huawei and others now seem poised to bring such smartphones also to emerging countries, with hardware that has quite a different price point.

$80 for a current IDEOS device from Huawei is a price point that seems affordable to quite an audience, for example in Kenya. Erik Hersman, who doesn't only write about tech in Africa but knows the countries there inside out, reports via links that in Kenya alone 350,000 Ideos smartphones were sold this year. Apps with local appeal have sprung up in the meantime and Google is fostering development and thus device take-up. After "less walk more talk" this could very well be the important jump-start required for the second revolution mobile technology brings to emerging countries.

And I wonder how Opera Mini adoption is doing on the Ideos!? For full web access, the browser offers an ideal combination: access to full web pages on low spec (RAM, processor power) smartphones in combination with only 2G EDGE coverage in most places. Not that I would give up Opera Mini on my high end smartphone with 3G network availability in most places, as even here it has its advantages, especially on trains and when roaming. But that's another story, already previously told.

Pad Revenue and the Happy Pipe

So far I have wondered why network operators around the world were enthusiastic about selling pad devices from a number of manufacturers that have not even implemented their core service, i.e. circuit switched voice calling, and barely do SMS, while on the other hand Skype and other over the top voice services run just fine on them!?

Perhaps their pain is sweetened by the fact that people are willing to pay 500+ euros for the device, from which they get a commission if it is bought in their shop I suppose, and on top they get a monthly service revenue for data usage. A real-life example: someone I recently met is very cost conscious when it comes to voice calls. It must be a cheap device, and the choice of prepaid vs. postpaid and the cost per minute are carefully thought over before the cheapest one is picked. But when it comes to the pad, the floodgates open: the 500+ euro device is bought like it is nothing and a 25 euro per month service contract is certainly no barrier either.

So strong is the attraction of over the top services that people are willing to spend 25 euros a month on network service to get them in addition, yes, in addition (!), to another device and service contract / prepaid SIM for voice. Quite a "happy pipe" situation I would say.

Results of the 2.6 GHz LTE Frequency Auction in France

Compared to the UMTS auctions back in the year 2000, LTE frequency auctions seem to be of much less interest to the press. Without much fanfare, France held its 2.6 GHz frequency auction and finalized it this week, about one and a half years after similar proceedings in Germany and elsewhere. They are by no means the last, however, with LTE spectrum still unassigned in many other European markets, for example the UK and Italy. According to this report from Telegeography, the following companies have acquired spectrum:

  • France Telecom (Orange) got 2×20 MHz of spectrum
  • Iliad (Free) also got 2×20 MHz with the highest bid. Iliad is currently building a 3G network with only 2×5 MHz of spectrum available and if they are anything as competitive with their prices as in the fixed line ADSL world, that won't last for long. So the 20 MHz of LTE spectrum is a good investment in the future.
  • Bouygues and SFR each got "only" 2×15 MHz.

Total money spent on the licenses was just under 1 billion euros. This is significantly more than what resulted from the same auctions in Germany, where the total amount spent for spectrum in the 2.6 GHz band was roughly 260 million euros (the overall result was 4.4 billion euros, but most of that money was spent on the 800 MHz digital dividend band). Also interesting is that two of the incumbents settled for 2×15 MHz rather than trying to go for the full 2×20 MHz.

K9 Is Great In A World Where Everything Has To Be Simple

Here's a tribute to complexity once in a while:

On my mobile device my favorite eMail client is Profimail. Yes, it's little known, but it is ultra configurable and does exactly what I want. It makes an LED on my phone blink when eMails come in; I can use POP3 instead of IMAP because I like that; I can configure when and how messages are deleted on the mobile and on the server; and I can configure how much of an email is downloaded in the background, so I don't have to wait when reading the email later because only the header was loaded, or incur high data charges while roaming because multi-megabyte file attachments that are totally useless on the mobile device were downloaded as well. On top, its user interface has a number of nice hidden gems that increase productivity, and the built-in file browser makes things very efficient for quick photo viewing or file attachments. Well, I think you get the message. There's also a version for Android, but when I tried it some months ago I couldn't get the visual indication working, and for some reason or other the program now and then had issues synchronizing while the phone was in dormant state. Perhaps this will be fixed once I make the step from my current Nokia N8 to an Android phone (unless Nokia makes a radical turnaround for the good of MeeGo), but perhaps not.

As a consequence I gave K9 a try, and I have to say it's a fantastic eMail app for Android which is as configurable as Profimail. And on top, the visual indication for incoming emails and synchronization in the background work flawlessly. In a world where everything has to be simple and non-configurable to appeal to a mass audience, it's refreshing to see that there are programs out there that can be configured to do just what I want. Great!