Still No UMTS and LTE in the Paris Metro

One and a half years ago I wrote a blog post about the growing pains of taking the Paris metro and accessing the Internet over a 2G network that just couldn't absorb the load anymore. At the time I noted that there were talks between the metro operator and one of the French network operators about deploying 3G and LTE in the metro. Sadly, it still hasn't happened, and by now the 2G network fails completely for Internet access. A sad state of affairs. How long will I have to wait before I come back and am positively surprised?

But to end this post on a positive note: outside the metro, using 3G has become a lot simpler from an international roaming point of view, because the European data roaming rates of my home network operator have reached a level where day-to-day web browsing on the mobile and some data use from the notebook are affordable enough that I no longer have to ration things quite so strictly. Good!

100 Gigabit/s Ethernet Backhaul At The Upcoming CCC Conference

… yes, you read that right, the upcoming Chaos Communication Congress will have a 100 Gbit/s Ethernet backhaul. When I first read it in the press I had a hard time believing it, but here's the original blog post on the CCC's web site (and they know what they are talking about…)

Last year's congress was attended by 6000 participants. Divide one value by the other and that's just over 16 Mbit/s per participant if everybody suddenly decided to download something at the same time. As this is unlikely to happen at any moment during the conference, you can imagine what kind of connectivity experience one will have there. Unfortunately, I've never been able to make the event fit my schedule. Next year, perhaps.
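For those who like to check the numbers, here's the back-of-the-envelope division as a quick Python sketch (the figures are the ones stated above):

    # Backhaul share per participant if everybody downloaded at the same time.
    backhaul_gbit = 100    # 100 Gbit/s Ethernet backhaul
    participants = 6000    # attendance of last year's congress

    share_mbit = backhaul_gbit * 1000 / participants
    print(f"{share_mbit:.1f} Mbit/s per participant")   # -> 16.7 Mbit/s per participant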

Let's be a bit crazy and compare the 100 Gbit/s link to, say, the aggregate throughput of Vodafone Germany on New Year's Eve 2011, which I calculated to be 7.9 Gbit/s. And the fixed-line interconnect traffic of the German incumbent peaked at 1,800 Gbit/s on the same day, as reported here.

100 Gbit/s for 6000 congress participants. Sounds like a very, very fat pipe indeed!

TCP, Fragmentation and How The MTU Controls The MSS…

Every now and then I encounter networks that my VPN struggles with. Sometimes the VPN tunnel is established just fine and pings go through the tunnel, but web browsing and other download activities just don't work. The effect is caused by fragmentation, i.e. the IP packets in the downlink direction are too large for some part of the network between me and the server and are hence either split somewhere along the way or simply discarded for being too long.

The remedy for such behavior is to reduce the Maximum Transmission Unit (MTU) of the tunnel interface on my computer to a lower value such as 1200 bytes, and things come back to life. What I always wondered, though, but never had the time to figure out, is how the server is notified of the reduced MTU!?

When I recently encountered the scenario again, I had a closer look at the TCP connection establishment and found that the information is conveyed in the TCP header of the very first packet of a connection, the SYN packet. The parameter that announces the maximum size a TCP segment may have is the Maximum Segment Size (MSS) option. The first image on the left shows the default MSS over my Ethernet at home, 1460 bytes. Together with the 20-byte TCP and 20-byte IP headers, that adds up to exactly the 1500-byte MTU configured on my Ethernet interface, or a 1514-byte frame on the wire.

When I change the MTU size on the fly on my Linux machine with 'sudo ifconfig eth1 mtu 800', the MTU size shown by 'ifconfig' becomes 800. The MSS announced in the SYN packet then becomes 760 bytes (800 minus 20 bytes each for the TCP and IP headers) and the Ethernet frame is 814 bytes long. The 14 extra bytes are for the Ethernet header, which is not counted against the MTU because it is discarded at the next router and replaced by another Ethernet header, or by a different protocol if the next hop is over a different network technology.
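Put as a small sketch, the arithmetic looks like this (assuming IPv4 and TCP headers without options, 20 bytes each, as in the traces above):

    IP_HEADER = 20    # IPv4 header without options
    TCP_HEADER = 20   # TCP header without options
    ETH_HEADER = 14   # Ethernet header, not counted against the MTU

    def mss_for_mtu(mtu: int) -> int:
        """MSS a host announces in its SYN packet for a given interface MTU."""
        return mtu - IP_HEADER - TCP_HEADER

    for mtu in (1500, 800):
        print(f"MTU {mtu}: MSS {mss_for_mtu(mtu)}, "
              f"max frame {mtu + ETH_HEADER} bytes")
    # MTU 1500: MSS 1460, max frame 1514 bytes
    # MTU 800: MSS 760, max frame 814 bytes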

There we go, another mystery solved.

Mouse – Keyboard – Wi-Fi – A Layer 1 Trace

Over the years I've used Metageek's Wi-Spy USB tracer a lot to figure out what is going on in the 2.4 GHz Wi-Fi band. When I was recently investigating a slow Wi-Fi, which I ultimately traced down to a runaway Wi-Fi card, I also picked up the signals of my wireless mouse and keyboard alongside my own Wi-Fi signal. The image on the left shows the three signals: the mouse transmits near channel 2, the keyboard near channel 7, and the Wi-Fi center frequency is on channel 11. The green dots in the lower part of the image even show when I used the mouse and when I used the keyboard. The Wi-Fi was pretty dormant during the trace, so its part of the image was created only by the beacon frames of the access point.

Is It Ethical For A Nation To Infect 50,000 Computers With Digital Sleeper Agents?

Over the past days we've heard in the media that the NSA has infected at least 50,000 computers worldwide with digital sleeper agent software, as Techcrunch puts it. Obviously this has created a lot of outrage across the industry and also in the non-technical media. But despite all the outrage, hardly anybody has pointed out that actively infecting computers is an order of magnitude worse, from an ethical point of view, than anything else we have heard about the NSA's doings in recent months.

Listening passively on transmission links and harvesting data is one thing (which is already bad enough by itself), but infecting 50,000 computers with spyware is quite another. And I wonder who those 50,000 computers belong to!? Did the NSA really find that many terrorists out there? Somehow I doubt it. As if it weren't already bad enough that companies and individuals have to fight criminals trying to infect their PCs with malware that steals passwords, extorts money, and so on. No, now we also have to defend ourselves against nation states doing similar things on a similar scale!?

It makes me wonder when this will go from accusation to proof. What it would take is the code or the executable of the malware and a link back to its origin. With that in hand, it wouldn't take long to actually find the malware in the wild (unless all copies destroy themselves without leaving a trace). And then imagine the malware being found on computers of governments and private companies around the world. This is the point at which the abstract becomes personal. And when you look at what happened when the German Chancellor found out her phone calls were being listened to, you get an idea of what is likely to happen in this case. Is it really possible to cover up 50,000 infections?

It really depresses me that a nation would go that far… And while we are at it: what makes us think only one nation considers it a good idea to do such things?

My Smartphone Contacts The Network 10 Times Per Hour When It's Idle

One train of thought I followed with the easy smartphone Wi-Fi tracing setup I wrote about recently is how often a typical smartphone contacts the network per hour, even when it is not used and just lies on the table, and what impact that has on the cellular network in a larger context. Even though I monitored the device's behavior over Wi-Fi, the result can be applied to cellular networks as well, as most applications are unlikely to differentiate between Wi-Fi and cellular connectivity anymore. The result is quite interesting:

Even without user interaction, my smartphone contacts the network 10 times per hour. Out of these, 4 are for checking email. Another 4 times, Android calls home to Google, mainly using a Google Talk domain, even though I've disabled the app. Less frequently, a DNS query and subsequent traffic to a number of additional Google domains can be observed. I feel quite observed by such unwanted behavior, but there's something that can be done about it with a rooted phone, as I've described here in the past. Further connections are made for various other purposes: my calendar and address book are synchronized with my Owncloud server at home every four hours, an NTP server is queried to keep the clock in sync, crash reports are sent to crashlytics.com (have I ever consented to this?), the weather app requests updates, the GPS receiver requests ephemeris data periodically, etc.
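For those who want to reproduce such a tally from their own capture, here's a minimal sketch using the scapy Python library to count DNS queries per domain (the pcap file name is just a placeholder for your own capture file):

    # Count DNS queries per domain in a capture taken with the tracing setup.
    # 'smartphone_idle.pcap' is a placeholder for your own capture file.
    from collections import Counter
    from scapy.all import rdpcap, DNS, DNSQR

    queries = Counter()
    for pkt in rdpcap("smartphone_idle.pcap"):
        if pkt.haslayer(DNS) and pkt[DNS].qr == 0 and pkt.haslayer(DNSQR):
            queries[pkt[DNSQR].qname.decode().rstrip(".")] += 1

    for domain, count in queries.most_common(10):
        print(f"{count:4d}  {domain}")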

So what does this mean on a larger scale? Let's say a network operator has 15,000 3G base stations (extrapolated from here) and 10 million smartphones. If those smartphones were evenly distributed across all base stations, there would be around 660 smartphones per base station, or around 220 smartphones per sector. If each smartphone connects to the network 10 times an hour, that's 2,200 requests per hour per sector. If each connection is held for 10 seconds on average, that's 2,200 requests / (60 minutes * 60 seconds / 10 seconds) = around 6 concurrent connections just for the background traffic of the devices.
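Here's the same estimate as a small script, so the assumptions are easy to play with (all figures are the ones assumed above):

    # Concurrent background connections per 3G sector, back-of-the-envelope.
    # All input figures are the assumptions from the text above.
    base_stations = 15_000
    smartphones = 10_000_000
    sectors_per_station = 3
    requests_per_phone_per_hour = 10
    hold_time_seconds = 10

    phones_per_sector = smartphones / base_stations / sectors_per_station
    requests_per_hour = phones_per_sector * requests_per_phone_per_hour
    concurrent = requests_per_hour * hold_time_seconds / 3600

    print(f"{phones_per_sector:.0f} phones per sector, "
          f"{requests_per_hour:.0f} requests per hour, "
          f"{concurrent:.1f} concurrent connections")
    # -> 222 phones per sector, 2222 requests per hour, 6.2 concurrent connections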

Some cells are obviously busier than others, so some probably see two or three times this number, i.e. 15-20 concurrent connections just for background traffic. As the number of concurrent users a 3G cell can handle is likely below a three-digit figure, that's quite a sizable percentage. And 10 connections per hour may even be a conservative number, as many subscribers use instant messengers that send frequent TCP keep-alive packets so they don't lose connectivity to the server. On the other hand, many smartphones are used over Wi-Fi, especially when people are at home, which is likely to significantly reduce background traffic over cellular in residential areas. Not so in business areas, however.

So where do we go from here? One good thing is that LTE networks are mostly in place now, and many new smartphones, especially those of heavy users, are LTE-capable by now. That significantly reduces the load on 3G networks. And from what I hear, the number of simultaneous users an LTE cell can handle is much higher than in a 3G cell. The right technology at the right time.

The GSM Logo: The Mystery of the 4 Dots Solved

A few weeks ago I asked the question here whether anyone knew what the 4 dots in the GSM logo actually stand for. A few people contacted me with good suggestions as to what the dots could mean, which was quite entertaining, but nobody really knew. On the more serious side, however, a few people gave me interesting hints that finally led me to the answer:

On gsm-history.org, a website of Friedhelm Hillebrand & Partners, there is an article written by Yngve Zetterstrom, who in 1989, the year the logo was created, was the rapporteur of the Marketing and Planning (MP) group of the MoU (the Memorandum of Understanding group, later to become the GSM Association (GSMA)). The article contains interesting background information on how the logo was created, but no details on the 4 dots. After some further digging I found Yngve on LinkedIn and contacted him. And here's what he had to say to solve the mystery:

"[The dots symbolize] three [clients] in the home network and one roaming client."

There you go, an answer from the prime source!

It might be a surprising answer, but from a 1980s point of view it makes perfect sense to put an abstract representation of GSM's roaming capabilities into the logo. In the 1980s, Europe's telecommunication systems were well-protected national monopolies and there was no interoperability of wireless systems beyond country borders, save for an exception in the Nordic countries, which had deployed the analogue NMT system and whose subscribers could roam to neighboring countries. But international roaming on a European and later global level was a novel and breakthrough idea in the heads of the people who created GSM at the time. It radically ended an era in which people who wanted to go abroad had to remove the telephone equipment installed in the trunks of their cars (few could afford it, obviously), or alternatively have the equipment sealed, or sign a declaration that they would not use it after crossing the border. Taking a mobile phone in your pocket across a border and using it hundreds or thousands of kilometers away from one's own home country was a concept few could have imagined then. And that was only 30 years ago…

P.S.: The phone in the image with the GSM logo on it is one of the very first GSM phones from back in 1992.

Tracing Smartphone Network Interaction over Wi-Fi

Over the years I've come up with a number of ways to trace the network traffic from and to a smartphone for various purposes. So far they all had in common that the setup took some time, effort, and in some cases bulky hardware. So in quite a number of cases I shied away from taking a trace because the setup just took too long. But now I've come up with a hardware solution for Wi-Fi tracing that isn't bulky and is set up in 60 seconds.

Earlier this year I bought an Edimax USB-powered Wi-Fi mini access point that I have since used many times to distribute hotel and office Wi-Fi networks to my devices. Apart from being small, it's easy to configure and ready less than a minute after being plugged into a USB port for power. To trace the data exchanged with a smartphone, it only needs to be connected to the Ethernet port of my notebook, which in turn is connected to the Internet via another network interface, e.g. its own Wi-Fi card. In addition, Internet sharing has to be activated for the Ethernet port of the PC. This is supported in Windows and also in Ubuntu in the network configuration settings.

Once that is done, Wireshark can be used to monitor all traffic over the Ethernet interface. If the smartphone is the only device served by the mini access point, only its traffic traverses the Ethernet interface (and from there the notebook's Wi-Fi uplink), while the notebook's own traffic goes directly to its Wi-Fi adapter. That means no special filtering of any sort is required to isolate the data flowing to and from the smartphone. The figure on the left shows the setup. Super easy and super quick to set up.
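And if you prefer scripting over the Wireshark GUI, the same capture can be taken with a few lines of Python and the scapy library. A minimal sketch, assuming the notebook's Ethernet port is called 'eth0' (adapt the name to your system and run it with root privileges):

    # Capture the smartphone's traffic on the shared Ethernet port for 10 minutes.
    # 'eth0' is a placeholder for the notebook's Ethernet interface name.
    from scapy.all import sniff, wrpcap

    packets = sniff(iface="eth0", timeout=600)
    wrpcap("smartphone_trace.pcap", packets)   # save for later analysis in Wireshark

    for pkt in packets[:20]:
        print(pkt.summary())                   # quick look at the first packets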

Living In A Post-GSM World

While there are few network operators in Europe, if any, openly thinking about shutting down their 2G GSM networks at this point in time, network operators in other parts of the world are seriously contemplating it or have already done it.

One of the very first operators to shut down its 2G network was NTT DoCoMo in Japan. Agreed, it was a special case, as it wasn't GSM but a local proprietary system, but still. Last year, AT&T announced that it will shut down its GSM network in 2017. That's not so far away anymore, and from what I can tell they are serious about it. And the latest example is a network operator in Macau, according to this post on Telegeography. O.k., that's a special case again, but still, the number of 2G network shutdowns is growing.

It makes me wonder how much longer it will take in Europe before the first operators seriously contemplate such a move. Two or three years ago I still saw a point in having 2G EDGE networks in the countryside. Web pages were smaller than today, smartphone penetration was nowhere near today's level, and web browsing over EDGE still worked, especially with network-side compression. But today it has become almost impossible. As soon as an area only has 2G coverage, all smartphones drop to that EDGE signal, which then becomes completely overburdened. And then there's the size of web pages, which keeps growing; even smartphone-optimized versions of web pages come with lots of JavaScript and other niceties. It has come to the point that I have switched off 2G on my smartphone, not only because there's no Wideband AMR but also because falling back to EDGE for data is just useless anyway.

Sure, there are probably quite a number of 2G-only embedded modules in machines today (including the block heater of my car and my GSM-controllable power socket) and 2G-only mobiles in the hands of people. But I guess their number will not dwindle before an announcement is made, and there will surely be lots of complaints, especially from the embedded side.

This makes me wonder how the story will play out in Europe!? With multi-RAT base stations it might not be very costly to keep GSM running in the future. As traffic on GSM goes down, one could re-farm the spectrum and put LTE in the freed space or extend the bandwidth of existing LTE carriers. That inevitably means LTE will be deployed in many different bands simultaneously, which will require efficient load balancing algorithms between the different carriers. But compared to other features such as SON, HetNet, etc., that should be rather simple to accomplish.

Five years ago I already speculated on this blog about the conditions for a GSM phase-out and potential exit scenarios. Have a look here. The reasons for keeping a GSM network that I listed back then are pretty much gone today, due to the emergence of LTE on high and low frequency bands and 3G devices now including the 900 MHz band for Europe and at least two or three roaming bands. Good to see how technology has advanced. So let's see which of the exit scenarios described in that five-year-old blog post will be used.

Electrical Power is Everywhere – A Model for The Future of the Internet?

When I recently flew over a big city in the very early hours of the morning, I was amazed how many lights I could see despite most people being sound asleep. Tiny dots of light everywhere. What struck me then is that in our society, electrical power is so important and so cheaply available that wires are dragged everywhere. There's a light bulb every couple of meters, obviously far more of them than there are cellular base stations in the city. While cellular networks as we know them today have mobilized the Internet and brought it to many places, there are still many, many places with inadequate coverage, even in well-covered cities, inside buildings and outside. But even in those places there's electricity for lighting and many other purposes. As the importance of the Internet continues to rise, it made me wonder whether at some point we'll see a shift towards networks that are built in a way similar to how our electrical grid works today: a wire with a small transceiver at the end, dragged basically everywhere.

Light does not come from a central place; instead, individual small light bulbs each cover a small area. So perhaps we'll see a similar evolution in mobile networks!? Obviously, that's easier said than done, as there are significant differences between the power grid and wireless networks:

First, there's usually no unwanted interference between two light bulbs, unlike between two radio transmitters that are close together. Also, transporting electrical power through a cable is much simpler than transporting a multi-megabit stream of data. But then, we've been transporting electrical power through cables for more than a century now, and technology has evolved.

Another big difference is that while wireless networks serve the public, wires for electrical power are usually put in place because the owner of a building requires power at a location for his own purposes, not for the public. Even lighting in public places follows a different rationale than wireless networks: someone is interested in illuminating a place, e.g. for security reasons. What interest would someone have in installing Internet connectivity in the same manner?

And another challenge that comes to mind: while a light bulb doesn't really care who delivers the power, wireless Internet connectivity is supplied by a number of different network operators. Installing little devices that distribute Internet connectivity would therefore either require installing different boxes for different carriers, or a new sort of device that could redistribute the connectivity of several providers.

But coming back to the basics: extending electrical power to the last corner is what we do in our society, and it is done at a price the individual can afford. It makes me wonder whether something similar can be done in the Internet domain, what it would look like, and how long it would take to realize.