On Which Planet Does Swisscom Live with Their Wi-Fi Hotspots?

I recently checked out a hotel in Germany for a future conference to see what kind of network coverage and Wi-Fi it provides. It turned out that the hotel's Wi-Fi system is operated by Swisscom. Many years ago I used their service once and was quite surprised when, after 250 megabytes, the connection was cut off abruptly and I had to pay again. Since then I have given them a wide berth.

Surely this can't still be the case these days, especially not at the horrendous price of over 17 Euros per day!? It turns out that the very same limit is still in place. Swisscom, you can't be serious: 17 Euros a day is outrageous even without a transfer limit. We'll definitely stay away from your offer and go for an alternative. Too bad for you, and too bad for the hotel, but the market has long since moved on to offering Wi-Fi as part of the price of the hotel room (I don't pay extra for towels either…), or, failing that, to offering connectivity at a price you don't have to think about twice.

LTE Trials and LTE Trials

Every couple of weeks I hear of a new LTE trial by one network operator or another, and I am beginning to wonder what they are actually trialing, or what the word "trial" actually means!? Two years ago, when the first network operators in the Nordic countries trialed LTE, it was uncharted territory: the network hardware and software were in their infancy, mobile devices were hardly available, and anything that had something to do with LTE was a trial that brought the technology one step closer to actually working.

Fast forward to the end of 2011 and the beginning of 2012, and many LTE networks are providing service to customers. However, some network operators are still trialing LTE or are just starting to do so. These trials are quite different from those of two years ago. The technology is here and customers are already using the networks, even though some vital ingredients are still missing, such as a voice service (for which no good near-term solution is at hand) and seamless handover to 3G.

In other words, LTE trials today are more an exercise for network operators to warm up to the technology than the pioneering work of a few years ago.

Submarine Communication

When I am abroad or access information stored on a server on another continent, the data traverses submarine cables, sometimes for more than 10,000 km in a single hop before resurfacing at the other end. Quite a piece of technology that I knew little about so far. Recently, however, a comment on a post contained an interesting link to Greg's cable map of deployed submarine cables, which contains interesting information about each cable and links to further details, many of them on Wikipedia. This entry contains general information about submarine cables, and here, here and here, as an example, some information can be found on a particular cable (TAT-14) that is currently in use in the Atlantic.

Here are some facts which I found interesting:

Capacity: The system capacity of the cable is given as 1.87 TBit/s, or half that in each direction, by the third link above on the TAT-14 website. The two Wikipedia articles linked above give some more details on how the capacity is calculated, but the descriptions do not match the 1.87 TBit/s. The next cable generation seems to be close to entering operation, however: the WASACE cable, foreseen to go live in 2014, is listed on Greg's cable map with a capacity of 40 TBit/s, due to the use of 100 Gbit/s per wavelength instead of the current 10 Gbit/s.

Signal regeneration: The German Wikipedia entry states that there is a repeater every 50 to 70 km, but does not give a source for this information. The English entry mentions, also without a reference, the use of erbium-doped fiber amplifiers (EDFAs) as the repeater / amplification technology, which amplify the optical signal directly without first converting it into an electrical signal. EDFAs are also mentioned in the general Wikipedia article on optical data transmission (see the section on "optical telephone cables") as the amplification technology for intercontinental cables.

Below ground: Where possible, the cable is buried one meter deep in the sea bed to protect it from anchors and fishing nets, which seem to frequently plague cables in areas where they are not well protected.

Lifespan: Cables laid at the end of the 1980s (i.e. before GSM was launched in 1992, to give a reference point), such as TAT-8 (the first optical cable across the Atlantic!) and PTAT-1, were operated until 2002 and 2004 respectively. In less than 15 years, the cables' capacity of 20 GBit/s, which equals 40,000 telephone circuits according to the Wikipedia entry, became only a fraction of the capacity offered by the new cables coming online in those years, such as TAT-14 with a used system capacity of 1.87 TBit/s, roughly a factor of 90 or two orders of magnitude more. In other words, the 10-15 year telecoms cycle found in mobile networks (GSM – UMTS – LTE) applies to this field of telecommunication as well.

Internet @ Meetings: Traffic Shaping

Yes, the Internet at meetings topic keeps me interested and I keep refining my setup. The major problem with hotel or meeting room Wi-Fi is definitely its instability when too many client devices use the network simultaneously. But even if the local network doesn't crash when it sees 80 to 100 devices at once, there is usually another side effect: the network becomes very slow.

This is because it doesn't take a lot for the network to become congested. If the network is unmanaged, two or three devices uploading or downloading data continuously at the full bandwidth the network is capable of is all it takes to significantly slow down communication for everyone. Uplink congestion in particular delays the TCP acknowledgment packets for downlink data on their way back to the server on the Internet, which makes it impossible to use the full downlink bandwidth that is available.

To tackle uplink and downlink congestion, I had a look at whether there is any way to shape uplink and downlink data streams on the router. Shaping individual services or streams is of little use, as the majority of meeting participants use a VPN and thus all of their communication flows over a single TCP or UDP stream. I was therefore looking for a way to shape all traffic to and from individual IP addresses, to ensure that adequate uplink and downlink resources remain available even if several users transmit data simultaneously over a prolonged period of time.

After quite a bit of research I found out how to perform traffic shaping the way I want on DD-WRT. Below is the script I've come up with to shape traffic on a per-IP-address basis, with different maximum throughput speeds for uplink and downlink transmissions. Uplink traffic is shaped on the WAN Ethernet interface, while downlink traffic is shaped on the LAN Ethernet interface for devices connected to the router via cable, and also on the two Wi-Fi interfaces (2.4 and 5 GHz) for the majority of devices that communicate wirelessly. Consequently, uplink traffic is only shaped on one interface while downlink traffic needs to be shaped on three interfaces.

In practice, my Netgear 3700 router with a 680 MHz processor can shape an aggregate traffic of around 40 MBit/s. In the vast majority of circumstances, that's much more than what the backhaul link provides anyway. In addition to being limited to a given maximum throughput, each IP address also gets its own transmit queue, and packets are sent from all queues in a round-robin fashion. That means that even if the link gets congested, nobody has to line up behind a long queue of packets from heavy users, as is the case in a default setup without any shaping and a single queue per interface.

And here’s the script:

#!/bin/sh

set -x

#LAN Ethernet interface (downlink shaping)
DEV=eth0
#WAN Ethernet interface (uplink shaping)
DEV2=eth1
#2.4 and 5 GHz Wi-Fi interfaces (downlink shaping)
DEVWIFI0=wifi0
DEVWIFI1=wifi1

DOWN_SPEED=5000kbit
UP_SPEED=1500kbit

#The /24 IP subnet (first three octets) used by the clients
IP_NET=192.168.2.

tc qdisc del dev $DEV root
tc qdisc add dev $DEV root handle 1: cbq avpkt 1000 bandwidth 100mbit

tc qdisc del dev $DEV2 root                                          
tc qdisc add dev $DEV2 root handle 1: cbq avpkt 1000 bandwidth 100mbit 

tc qdisc del dev $DEVWIFI0 root                                          
tc qdisc add dev $DEVWIFI0 root handle 1: cbq avpkt 1000 bandwidth 100mbit

tc qdisc del dev $DEVWIFI1 root                                          
tc qdisc add dev $DEVWIFI1 root handle 1: cbq avpkt 1000 bandwidth 100mbit

#start with x.x.x.2 ip address (.1 is the gateway)
l=2

while test $l -le 253
do
    #
    # limit downlink bitrate
    tc class add dev $DEV parent 1: classid 1:$l cbq rate $DOWN_SPEED \
         allot 1500 prio 5 bounded isolated
    tc filter add dev $DEV parent 1: protocol ip prio 16 u32 \
           match ip dst $IP_NET$l flowid 1:$l

    tc class add dev $DEVWIFI0 parent 1: classid 1:$l cbq rate $DOWN_SPEED \
         allot 1500 prio 5 bounded isolated
    tc filter add dev $DEVWIFI0 parent 1: protocol ip prio 16 u32 \
           match ip dst $IP_NET$l flowid 1:$l

    tc class add dev $DEVWIFI1 parent 1: classid 1:$l cbq rate $DOWN_SPEED \
         allot 1500 prio 5 bounded isolated
    tc filter add dev $DEVWIFI1 parent 1: protocol ip prio 16 u32 \
           match ip dst $IP_NET$l flowid 1:$l

    # limit uplink bitrate
    tc class add dev $DEV2 parent 1: classid 1:$l cbq rate $UP_SPEED allot 1500 prio 5 bounded isolated
    tc filter add dev $DEV2 parent 1: protocol ip prio 16 u32 match ip src $IP_NET$l flowid 1:$l
    

    l=$(($l+1))
done
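To check that the classes and filters are actually in place after the script has run, the current configuration can be inspected with tc itself, for example like this (the interface names are the ones used in the script above):

# show the per-IP classes and their packet/byte counters on the LAN interface
tc -s class show dev eth0

# show the uplink filters installed on the WAN interface
tc filter show dev eth1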

What if you called a bandwidth crunch and no one came?

… asks Ajit Jaokar over at OpenGardens in a recent blog entry. The post is a great collection of thoughts on why the spectrum crunch, so often cited these days as happening in the US, might actually be a myth. I would even go so far as to argue that there is no bandwidth crunch at all. About a year ago I compared the amount of spectrum available to US carriers with the amount of spectrum available in Europe; for the details see here. In summary, the amount of spectrum available on both sides of the Atlantic is about the same, but we certainly don't suffer from any bandwidth crunch over here in Europe; we are actually far from it in networks that are well dimensioned. Here's an example. So whatever the reasons are for slow networks in the US, it's not a lack of spectrum. For additional background reading, here are two interesting posts from Dean Bubley on the topic: flattening data growth and O2 UK's data usage patterns.

Droidsheep: Firesheep Moves To Android

It looks like Wi-Fi hotspots remain a significant weakness in the overall security landscape. A year ago, Firesheep was released and showed how easy it is, with just a notebook, to spy on other users of unencrypted public Wi-Fi networks and even use stolen session credentials to do things like sending tweets and Facebook messages. Some companies have reacted in the meantime by introducing or expanding the use of secure HTTPS sessions to protect their users, but many services, such as LinkedIn, eBay and others, remain vulnerable to some degree to this day.

But security researchers haven't stopped there. Now the Firesheep concept has moved to mobile devices in the shape of DroidSheep. All that's required is a rooted Android phone and the network is literally in the hands of an attacker. DroidSheep goes one step further than Firesheep and even includes ARP spoofing functionality, so all traffic of the Wi-Fi hotspot is redirected to the mobile device before it traverses the router to the Internet. This allows spying on others even in encrypted networks (if the WPA password is known, of course), which would otherwise prevent Firesheep and DroidSheep from working, since individual session keys are generated from the single password everyone uses.

To see what DroidSheep can do, I tested it myself in a private test network at home. Without much effort the program worked as advertised and showed pretty much every site I went to that was not using HTTPS. To my great astonishment, I saw that the Facebook mobile app running on another Android device communicated with the Facebook servers without any encryption! With DroidSheep I could take over the account in seconds, write to the wall and do other things with the Facebook account I was using on my other device. In the settings, the app even admits that encryption is currently not supported, as shown in the picture on the left.

For Wi-Fi hotspots to become an integral part of a cellular offload strategy, these security issues have to be tackled. The solution that comes to my mind is to automatically start a VPN as soon as a Wi-Fi hotspot is used and to prevent any user traffic from being transferred while the VPN is not in place or has dropped due to an attack that takes down the tunnel. Above all, starting the VPN, blocking unencrypted data traffic and restarting the VPN must be fully automatic so that it is completely transparent to the user. Otherwise it won't find widespread acceptance and use.
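Just to illustrate the "block everything unless the tunnel is up" part of the idea, here is a minimal sketch of how it could be done with iptables on a Linux client. The VPN server name, port and tunnel interface name are assumptions for the example, not part of any existing product:

#!/bin/sh
# Minimal "kill switch" sketch for a Linux client: allow loopback, DNS, the
# VPN handshake and traffic through the tunnel, then drop everything else.
# vpn.example.com, port 1194/udp and tun0 are assumptions for this example.
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -o tun0 -j ACCEPT
# DNS is needed to resolve the VPN server name before the tunnel is up
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
# the VPN handshake itself
iptables -A OUTPUT -p udp -d vpn.example.com --dport 1194 -j ACCEPT
iptables -P OUTPUT DROP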

What Iliad/Free Could Do

Currently there are only three network operators in France, but the launch of the fourth, 'Free', is imminent. Perhaps Free will be the last startup mobile network operator in Europe to launch a network from scratch for decades to come, as the trend in most other European countries is for network operators to merge. As Free is anything but an incumbent mobile network operator and has had a significant impact on the French DSL market with its low prices and new services, it is going to be interesting to see what kind of strategy they will use to win over customers from other network operators. Just doing the same but a bit cheaper might not do the trick, and it would be very much unlike the fixed-line 'Free'.

So here’s my wish list and some thoughts on what they could do differently, keeping in mind the following resources they have at their disposal:

  • 5 MHz of UMTS spectrum in the 2.1 GHz range
  • 20 MHz of LTE spectrum in 2.6 GHz
  • No LTE 800 MHz spectrum, but the right to use SFR as a host network for LTE in the 800 MHz band for deep indoor and rural coverage.

The Prepaid Voice and SMS Market

The first thing that comes to mind is the prepaid voice and SMS market in France, which, compared to other countries in Europe, is totally underdeveloped. Prepaid prices are sky high, with per-minute charges ranging anywhere between 25 and 50 cents. Compare that to 6.8 cents a minute in Austria or 8 to 9 cents a minute in Germany. Another really annoying thing is the credit validity time. Even a 35 Euro recharge extends the validity by only three months. In other words, even on prepaid, a subscriber has to spend a minimum of 12.5 Euros a month or lose all credit still on the SIM card after three months. Perhaps Free can be different?

The Prepaid Mobile Internet Market

Free grew up in the French DSL market, so their pitch to the customer was to offer high-speed Internet access with a voice telephony flat rate to fixed-line destinations on top (replacing the POTS telephone of France Telecom). While the arrival of the iPhone and the iPad has triggered the emergence of prepaid data SIMs in France, they are by no means as cheap and ubiquitously available as in other countries, where you pick up a SIM in a supermarket and start using it. This is another untapped opportunity in France, and I’d be one of the first customers to pick up one of their prepaid data SIMs in a French supermarket for my netbook and occasional visits to France, if the prices in combination with the amount of data included were acceptable. Here’s a hint: in Germany you can buy prepaid SIM cards for Internet access for 3 Euros a day with a limit of 1 GB, 10 Euros a month for 500 MB, or 20 Euros for 5 GB. And compared to Austrian prices, that’s not even particularly cheap.

Free’s LTE Strategy

It’s going to be interesting to observe what Free will do with their LTE spectrum in the 2.6 GHz range. For launching their service, the 5 MHz slot in the 2.1 GHz range for UMTS will suffice to offer good service for a while. But should customers decide to go for Free, a single 5 MHz channel won’t last long, and most network operators have reacted in the meantime and are now using at least two 5 MHz channels in cities. Free will be able to do that too from 2013, when they get a 5 MHz channel in the 900 MHz band. Here they have an advantage over the incumbents, as they can use it for UMTS straight away instead of going for GSM first or having to clear the channel of GSM first. The incumbents can use HSPA+ dual carrier in the 2.1 GHz band, though, which gives them an advantage when it comes to maximum transmission speeds per user.

So Free might be a bit relaxed when it comes to LTE. Once they want to go for it, however, it might be easier than launching their UMTS network in the first place, since they now have their 3G equipment in place and don’t have to find any additional sites, which is the main overhead in terms of cost and delay. Their equipment is likely to be multi-RAT capable, so adding some more processing capacity and an additional radio module for the 2.6 GHz band will probably do the trick. If they were smart, the antennas installed over the past months are already 900 MHz and 2.6 GHz capable, making hardware modifications on top of a mast unnecessary. That would be quite an advantage over the incumbent network operators, who will likely have to retrofit their base station sites with new antennas and perhaps also new base stations, or put additional LTE base stations next to their installed equipment.

Actually, it might even be a good idea to start with LTE as quickly as possible and sell 3G/LTE USB dongles early on, so they can move the heavy users with notebooks and netbooks to LTE and reduce the load on their UMTS network, preserving the quality of experience for their smartphone users.

IMS, Even Trickier With National Roaming

So what about voice over LTE? Here, Free is in an even more difficult position than the already difficult one the incumbent mobile network operators are in, because of national roaming. Falling back to your own 2G/3G network (CS fallback) is already no fun and increases call setup time; falling back to a competitor's network due to national roaming would be trickier still. National roaming would also make VoLTE with handover to a circuit-switched GSM channel (Single Radio Voice Call Continuity, SRVCC) towards a competing network a huge challenge, because the core networks of two companies would be involved. So I think neither is even a remote option for Free in the short and mid-term, and if I were them I’d use LTE for Internet access only, perhaps with the exception of devices such as stationary routers, for which no fallback to a 2G or 3G network is required anyway.

Femtocells

This could be an interesting one for Free. There are indications that Free might include femtocell capabilities in their DSL home equipment. Depending on the take-up, that could also reduce the load on their macro network, as phone calls made at home by customers who also use Free for their home Internet access would go over the femtocell rather than the macro network. With attractive pricing for calls made via the femtocell at home, they could perhaps get other people in the family to switch to them as well.

Summary

Free has an impressive record when it comes to the disruption of the fixed DSL and cable market in France. The French wireless market offers interesting opportunities, already exploited in other European countries, that are waiting to be picked up. Also, their brand-new access network should make it quite simple for Free to launch LTE services quickly to reduce the load on their limited UMTS spectrum. I suspect it won’t be long now before we see their first offers.

French LTE 800 Auction Results: Background and Questions

Just before Christmas 2011, the French regulator ARCEP announced the results of the auction of the 2×30 MHz of spectrum in the 800 MHz digital dividend band. The three incumbents, Orange, SFR and Bouygues, each won 2×10 MHz. Free was unsuccessful in getting spectrum directly, but SFR is mandated by the auction's terms and conditions to cooperate with Free, (probably) because they won the 2×10 MHz in the middle of the frequency range.

Unfortunately, the press release does not quite reveal why it is SFR specifically that is required to share their spectrum with Free!? Also, the 1 billion Euros SFR paid for the middle section is 320 million more than what Bouygues had to pay for the lower section and another 120 million more than what Orange had to pay for the upper part of the spectrum, even though each of them got the same 2×10 MHz. So I am not quite sure why the middle part is the most valuable part of the band. I could imagine that the lower part might require some more coordination effort with terrestrial TV transmissions, which use the frequency range just below. But the upper part is unaffected by this, so why is SFR required to share their network with Free and not Orange? The press release doesn't go into those details, so if you know, please leave a comment, I'd be quite interested.

Other interesting bits and pieces mentioned in the press release:

  • All three network operators have committed to allowing "full" Mobile Virtual Network Operators (MVNOs) on their networks. It's not quite clear to me how "full" MVNOs are defined, but let's hope it will encourage more competition and thus better prices and conditions for customers in the future, given how closed and uncompetitive the French wireless market is today compared to other countries in Europe.
  • As in Germany, there are requirements to ensure that the 800 MHz spectrum is first used in rural areas to bring Internet connectivity to under-served regions. From the press release: "[the network operators] must commit to an accelerated rollout schedule in the most sparsely populated parts of the country".

Internet @ Meetings: DNS Performance And Blocking

One of the things I have observed in the past when offering Internet access at meetings is that ISPs assign DNS servers that answer DNS requests only slowly or unreliably. This is why by default I don't use these anymore but instead use Google's public DNS servers, reachable via the easily memorizable IP address 8.8.8.8. This address has been reachable with only minimal latency in all parts of the world I have been to so far.
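A quick way to compare the ISP-assigned resolver with Google's DNS is to query both and look at the reported query time, for example with dig (the local resolver address 192.168.2.1 is just an example, of course):

# query time reported by the ISP / router resolver
dig @192.168.2.1 www.example.com | grep "Query time"

# query time reported by Google's public DNS
dig @8.8.8.8 www.example.com | grep "Query time"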

Recently, however, when I was in the Philippines, I noticed that while the Google DNS server responded to ping requests, there were no answers to DNS requests. As the issue continued over a number of days, it seems that either outgoing DNS requests were blocked or incoming DNS responses were discarded before reaching me. I searched the web to see whether there is any information on the Philippines blocking foreign DNS servers but came up empty-handed. If you know more, please let me know, that would be quite interesting.
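The behavior is easy to reproduce if you are on such a network: the server answers ICMP echo requests, but a DNS query towards it simply times out. Something along these lines should show the difference:

# ICMP reaches the server and replies come back
ping -c 3 8.8.8.8

# ...but a DNS query on port 53 runs into a timeout
dig @8.8.8.8 +time=5 +tries=1 www.example.com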

Continental Bottlenecks and VPN Slowdowns

I was in the Philippines recently for a round of meetings, and it was interesting to observe the bandwidth available for unencrypted and VPN-encrypted traffic to overseas destinations.

During the later parts of the evening, at night and in the morning, I could easily transfer 2 to 3 MBit/s over my VPN connection. If I dropped the VPN connection and went to the same overseas websites directly and downloaded email (still encrypted over secure POP3 and secure SMTP), speeds were in the same region. During the day, however, speeds over the VPN connection dropped to 200 to 300 kbit/s, while unencrypted access to web resources in Europe and the US remained above the 1 MBit/s level. The difference remained even when I changed the VPN transport from UDP packets to TCP packets terminating at the HTTPS port 443.
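For those wondering how the transport switch is done: with OpenVPN, which my private VPN is based on, it only takes a change to the client configuration along these lines (the server name is a placeholder, and the server obviously has to listen on TCP port 443 as well):

# run the tunnel over TCP port 443 instead of the default UDP transport
proto tcp-client
remote vpn.example.com 443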

The same effect could be seen regardless of whether I used the hotel Wi-Fi or the 3G network via a local SIM card. That likely means it's the overseas link that becomes congested during the day. It is just strange that encrypted connections were more affected by it than plain HTTP traffic. Also, it didn't matter whether the VPN tunnel ended in the United States or in Europe; throughput levels were equally low during business hours.

I wonder if all that means certain types of traffic are given preference on overseas links!? It was also interesting to observe that my company VPN, which uses a different technology than my private OpenVPN-based VPN, was throttled down even further during the daytime, to 100 kbit/s and sometimes even less. Other participants at the meeting noticed the same behavior, which is a bit of an annoyance when everyone depends on new versions of documents stored on a server abroad.

To overcome this limitation, I mirrored some documents for the meeting on a local server in the IP subnet served by the Wi-Fi access point. Unfortunately, not all participants could reach it, as many company firewalls and VPN clients prevent devices from accessing local resources.
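By the way, such a local mirror doesn't need much infrastructure. Assuming a notebook with Python is connected to the meeting subnet, something as simple as the following is enough to serve a folder of documents over HTTP; participants then reach it via the notebook's local IP address (folder and port are just examples):

# serve the documents in the current folder on port 8000 (Python 2)
cd /path/to/meeting-documents
python -m SimpleHTTPServer 8000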