Living In A Post-GSM World

While in Europe few if any network operators are openly thinking about shutting down their 2G GSM networks at this point in time, network operators in other parts of the world are seriously contemplating it or have already done it.

One of the very first operators to shut down its 2G network was NTT DoCoMo in Japan. Agreed, it was a special case, as the network wasn't GSM but a local proprietary solution, but still. Last year, AT&T announced that it will shut down its GSM network in 2017. That's not so far away anymore and from what I can tell they are serious about it. And the latest example is a network operator in Macau, according to this post on Telegeography. O.k., that's a special case again, but still, the number of 2G network shutdowns is growing.

It makes me wonder how much longer it will take in Europe before the first operators seriously contemplate a move. Two or three years ago I still saw a point in having 2G EDGE networks in the countryside. Web pages were smaller than today, smartphone penetration was nowhere near today's level and web browsing over EDGE still worked, especially with network-side compression. But today it has become almost impossible. As soon as there is only 2G coverage in an area, all smartphones fall back to that EDGE layer, which quickly becomes completely overburdened. And then there's the size of web pages that keeps growing; even smartphone-optimized versions of web pages come with lots of JavaScript and other niceties. It has come to the point that I have switched off 2G on my smartphone, not only because there's no Wideband AMR but also because falling back to EDGE for data is just useless anyway.

Sure, there are probably quite a number of 2G-only embedded modules in machines today (including the block heater of my car and my GSM-controllable power socket) and 2G-only mobiles in the hands of people. But I guess their number will not dwindle before an announcement is made. And sure, there will be lots of complaints, especially from the embedded side.

This makes me wonder how the story will unfold in Europe!? With multi-RAT base stations it might not be very costly to keep GSM running in the future. As traffic on GSM goes down, one could re-farm the spectrum and put LTE into the freed space or extend the bandwidth of existing LTE carriers. That inevitably means LTE will be deployed in many different bands simultaneously, which will require efficient load balancing algorithms between the different carriers. But compared to other features such as SON, HetNet, etc., that should be rather simple to accomplish.

Five years ago I already speculated about the conditions for a GSM phase-out and potential exit scenarios on this blog. Have a look here. The reasons I listed back then for keeping a GSM network have pretty much disappeared due to the emergence of LTE on high and low frequency bands and 3G devices now including the 900 MHz band for Europe and at least two or three roaming bands. Good to see how technology has advanced. So let's see which of the exit scenarios I described in that five-year-old blog post will be used.

Electrical Power is Everywhere – A Model for The Future of the Internet?

When I recently flew over a big city in the very early hours of the morning I was amazed by how many lights I could see even though most people were sound asleep. Tiny dots of light everywhere. What struck me then is that in our society, electrical power is so important and so cheaply available that wires are dragged everywhere. There's a light bulb every couple of meters, obviously far more than there are cellular base stations in the city. While cellular networks as we know them today have mobilized the Internet and brought it to many places, there are still many places with inadequate coverage, even in well-covered cities, inside buildings as well as outside. But even in these places there's electricity for lighting and many other purposes. As the importance of the Internet continues to rise, it made me wonder if at some point we'll see a shift towards networks that are built the way our electrical grid works today: a wire with a small transceiver at the end, dragged basically everywhere.

Light does not come from a central place. Instead, individual small light bulbs cover a small area. So perhaps we'll see a similar evolution in mobile networks!? Obviously, that's easier said than done as there are significant differences between the power grid and wireless networks:

First, there's usually no unwanted interference between two light bulbs, in contrast to two radio transmitters that are close together. Also, transporting electrical power through a cable is much simpler than transporting a multi-megabit stream of data. But then, we've been transporting electrical power through cables for more than a century now and technology has evolved. Another big difference is that while wireless networks serve the public, wires for electrical power are usually put in place because the owner of a building requires power at a location for his own purposes and not for the public. Even lighting in public places follows a different rationale compared to wireless networks: someone is interested in illuminating a place, e.g. for security reasons. What interest would someone have to install Internet connectivity in the same manner? And another challenge that comes to mind is that while a light bulb doesn't really care who delivers the power, wireless Internet connectivity is supplied by a number of different network operators. Installing little devices that distribute Internet connectivity would therefore either require installing different boxes for different carriers or a new sort of device that could redistribute the connectivity of different providers.

But coming back to the basics: extending electrical power to the last corner is what we do in our society, and it is done at an affordable price for the individual. It makes me wonder if something similar can be done in the Internet domain, what it would look like and how long it would take to realize it.

The DIY-CPU Project: Shift Register Experiments

Now that I know how to read and write data to the memory chip, the next step in my Do It Yourself CPU project was to extend the possibilities of my Raspi Input/Output Interface. The 8 input and 8 output ports are not sufficient for controlling and testing the different functionalities I want to build over time. Already, outputting 8 data bits and 8 address bits at the same time is not feasible that way.

To fix this shortcoming I've decided to use several 8-bit serial-to-parallel converter chips. By chaining four of those chips together I only need one Raspi output port to sequentially write data for 32 output ports, plus one port for clocking in one bit after the other. To be able to individually enable output to the data bus, the address bus and the control lines (e.g. memory output enable, memory write enable, etc.) that are accessible this way, I need three additional Raspi output ports. In other words, I need 1 + 1 + 3 = 5 lines to control 32 output ports. Not too bad.
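
To illustrate the principle, here's a minimal Python sketch of how bits could be clocked into such a shift register chain from the Raspi. It's just a sketch of the concept, not my actual project code; the use of the RPi.GPIO library and the pin numbers are assumptions for illustration:

import RPi.GPIO as GPIO

DATA, CLOCK, STROBE = 17, 27, 22      # hypothetical BCM pin numbers

GPIO.setmode(GPIO.BCM)
for pin in (DATA, CLOCK, STROBE):
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

def shift_out(byte):
    # shift one byte into the register chain, most significant bit first
    for i in range(7, -1, -1):
        GPIO.output(DATA, (byte >> i) & 1)
        GPIO.output(CLOCK, GPIO.HIGH)     # rising clock edge shifts the bit in
        GPIO.output(CLOCK, GPIO.LOW)

for value in (0x12, 0x34, 0x56, 0x78):    # four chained chips = 32 bits
    shift_out(value)
GPIO.output(STROBE, GPIO.HIGH)            # latch all 32 bits to the outputs
GPIO.output(STROBE, GPIO.LOW)
GPIO.cleanup()

With four chained chips, four bytes are shifted in one after the other before the strobe signal latches all 32 bits to the output pins at the same time.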

The picture on the left shows the circuit I have put together to test the shift register approach. The memory chip is at the top of the breadboard, followed by a NOT gate chip that I use to invert the output of the first three bits of the data bus, as the input bits of the Raspi work with negative logic. This setup also separates the Raspi inputs from the bus lines, which I decided to do because I noticed that the Raspi inputs lower the voltage on the lines. This could become a problem later on when I connect registers and other things to the bus lines, which will further influence the overall behavior.

An additional NOT gate on the chip is used to invert the enable signal to the second serial-to-parallel (SIPO) converter chip, which allows me to control whether I want to output data from the Raspi to the data bus or not (i.e. whether the outputs of the SIPO are in tri-state mode). The two chips at the bottom are CMOS 4094 8-bit tri-state shift registers. For the experimental setup I used the first SIPO to output data to the address bus and the second one to output data to the data bus.

Yes, the setup starts to look a bit complicated. Time, therefore, to put what I have so far on a real board and solder things together.

Raising the Shields – Part 10: The Darkmail Initiative

Apart from video telephony, email is one of the services I still have to use without encryption and it is thus a thorn in my side in my quest to have as much privacy as possible online and to protect myself from the doings of surveillance states. I tried my luck with the Thunderbird GnuPG plugin, but in practice there are too many limitations for me (see here for the details). From my point of view, the email system as we use it today is broken as far as confidentiality and privacy are concerned and there's no way to fix it. The only cure is a complete redesign with security and privacy in mind. This is where the Darkmail Initiative comes in.

Founded by Ladar Levison, owner of 'Lavabit', a company that offered secure email storage and recently shut down rather than let the US government spy on its users after being forced to hand over its SSL encryption keys, Darkmail sets out to fix that particular problem by designing a new email system with built-in end-to-end encryption. This way, the user is in full control of encryption and service providers can no longer be forced to reveal SSL keys or other sensitive information.

This is the way it ought to be! Instead of just tapping and analyzing all data, surveillance of email will become more selective again, as the only point where an email is decrypted is on a person's device. And while I don't support general surveillance of the Internet, I very much support targeted tapping to keep us safe, provided that a warrant has been obtained from a judge after providing evidence as to its necessity.

Here's a link to a video of an interview the Huffington Post did with Ladar a couple of days ago. Apart from a general introduction, he also briefly discusses the impact end-to-end encryption will have on online email services such as Google, Yahoo, Microsoft and others. Think targeted ads based on automated scanning of email content (which is no longer possible on the server side)…

End-to-end encryption is the only way to keep email private and confidential. As current methods are insufficient, I fully welcome this initiative and have decided to back it over at Kickstarter, where Ladar is raising money to fund this open source project. Have a look, perhaps that's something you'd like to support as well.

And yes, I assume the 'dark' in Darkmail refers to the connection going 'dark' (i.e. being encrypted and not breakable) rather than implying dark dealings 🙂

How To Get A Prepaid SIM For Internet Access At Seoul Airport

When I was in Seoul for the first time a year ago, I suffered a bit as Korea was one of the few countries I had traveled to that didn't have prepaid SIMs for mobile Internet access. But the world keeps changing, I thought, so before going to Seoul again recently I did a quick search on the Internet to see if the situation had changed. And indeed it had: there's now a KT Telecom MVNO called Evergreen offering prepaid voice and data services with SIM cards that can be bought in convenience stores at the airport. Eureka!

Of course I tried it out and the basic steps described here are quite simple. Following the instructions, it took me only a few minutes to buy the SIM card for 30.000 Won (€21), with a balance of the same amount on it, at Incheon airport. After a couple of minutes and one or two device reboots, again as per the instructions, the phone registered with the network. Fortunately the phone configured the APN on its own, as there was no mention of it in the instructions. So far so good. In the default configuration, the 30.000 Won are good for around 590 MB of data (around 51 Won per MB), which is quite a lot already.

To get a higher data volume from the credit on the SIM, it is possible to activate a data package, e.g. the 1 GB package for 16.500 Won. This is where things get a bit tricky. The first step is to download the “Evergreen Mobile Services” app from the app store (Evergreen is the name of the MVNO on the KT Olleh network). The app can then be used to activate the data package. Unfortunately, once that is done, Internet connectivity is cut because the service expects the user to top up this amount or to move it from the 30.000 Won voice bucket to the data bucket. The latter operation can be done with the app, but only if you find alternative Internet access over Wi-Fi, as the connection was cut due to the activation of the bundle. No, that does not make sense and it's very inconvenient, but that's how it worked for me. After finding an open Wi-Fi access point I could transfer the credit from the voice bucket to the data bucket with the mobile app and Internet connectivity was restored instantly. A nice side benefit of activating the data option is that the remaining balance of the SIM card can be used for voice calls without cutting into the available data volume. So instead of 3 Euros a minute to Europe, I only paid around 46 cents.

Apart from this slight activation hiccup, the service worked well and I always got throughput rates of several megabits per second when I tried. So if the 590 MB are sufficient for your trip, don't bother with the activation of the data option. If you want more, make sure you have a Wi-Fi hotspot available before you attempt to activate a data bundle.

The Myth Of Rising Telecoms Investment

Pretty much whenever quarterly reports are presented by telecommunication operators these days, there is the usual note about the difficult situation they are facing due to the investment required to keep networks up with rising data demand. But is this really the case? The 2012 report of the German telecom regulator (in English) has some interesting numbers on that.

In 2012, investment in fixed and wireless telecommunication networks in Germany was around 6 billion Euros. That's the combined investment of all market players. Has it risen in recent years? Not according to the report. In reality, investment has remained pretty much stable over several years, and compared to the overall revenue of around 58 billion Euros in 2012, i.e. an investment-to-revenue ratio of roughly 10%, that seems a quite reasonable number, at least to me (see page 81 of the report).

Let's have a look at some more related numbers: While investment has remained stable, the number of employees in the German telecom sector went down from 184.200 to 176.000, so the pressure can be felt. End customer prices might also have fallen, but I would argue that this was mostly compensated by higher usage. This is reflected in a slight revenue decline from around 60 billion Euros in 2009 to 58 billion Euros in 2012. But when looking at the EBITDA of Vodafone Germany, which for 2012 was 3.359 billion Euros out of a revenue of 9.641 billion Euros, an EBITDA margin of around 35%, I don't really see big suffering.

30 Times More Data In Fixed vs. Wireless Networks And Slowing Data Growth In Wireless

Once a year, many telecom regulators in Europe publish their analysis of the state of competition in the telecommunication market. A while ago, the German regulator published its report for 2012 (in English), which contains, among many other interesting numbers, the amount of data transported through fixed and wireless networks in Germany.

As per the report, 4.3 billion gigabytes of data were transported through fixed line networks in Germany in 2012 (page 77), compared to 0.139 billion gigabytes (or 139.75 million GB, to sound more impressive) in wireless networks (page 78). In other words, around 30 times more data flows to and from fixed line connections than over wireless.

According to the report there are 28 million fixed line Internet connections in Germany today, so the average monthly amount of data per line is around 12 GB. Also interesting is the rise of fixed line data from 3.7 billion to 4.3 billion gigabytes from 2011 to 2012, a rise of 16%. In wireless networks, the amount of data transferred rose from 93 million GB to 139 million GB. That's a rise of around 50%, which is quite substantial but far from the doubling or tripling seen in each of the two years before. In other words, growth has been slowing down for a number of years now.
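
For those who like to check such numbers, here's a quick back-of-the-envelope verification in Python, using only the figures from the report quoted above:

# figures from the regulator's report as quoted above
fixed_2011, fixed_2012 = 3.7e9, 4.3e9    # fixed line data volume in GB
mobile_2011, mobile_2012 = 93e6, 139e6   # wireless data volume in GB
fixed_lines = 28e6                       # fixed line Internet connections

print(fixed_2012 / mobile_2012)          # fixed vs. wireless ratio: ~31x
print(fixed_2012 / fixed_lines / 12)     # GB per line and month: ~12.8
print(fixed_2012 / fixed_2011 - 1)       # fixed line growth: ~16%
print(mobile_2012 / mobile_2011 - 1)     # wireless growth: ~49%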

The report further says that there were 139 million mobile subscribers in Germany in 2012, out of which around 40 million actively transfer data (page 79). This made me think a bit. I pay around 40 Euros a month for my fixed line Internet and telephony connection today and around the same amount for wireless connectivity. And while the fixed line is shared, every family member has an individual mobile contract. So in effect, broken down per user, I pay less for my fixed line connection than for my wireless subscription and on top transfer over 30 times more data over it. Or, put the other way round, I pay more for my mobile subscription than for my fixed line and use it far, far less.

All of this would make sense if wireless networks were more expensive to build and maintain than fixed line networks. But is it really cheaper to drag a fiber cable close to people's homes these days and then run a copper wire to each individual house or apartment than to set up a base station on a rooftop that serves a thousand users? I have my doubts.

The Fairphone – How Much Does What Cost?

Which device will be my next smartphone? I've made my choice and it will be the Fairphone. It's in the process of being built by a small company established in the Netherlands and the aim is to produce it with people and the environment in mind: no child labor in African mines, fair wages for Chinese workers and safe working conditions. In addition, the company is open about the whole process of building the device and uses an open operating system, i.e. Android, and perhaps Firefox OS and Ubuntu in the future.

The device is in production now, with shipment foreseen around Christmas time. One interesting piece of information I recently came across when I wanted to get an update on their status is the cost breakdown of the device's retail price of €325, based on a production run of 25.000 devices. Here are some noteworthy numbers:

  • €129 design, engineering, components, manufacturing
  • €4.75 prototyping
  • €4.25 reseller margin
  • €9 certifications (CE, GCF, RoHS, FCC, REACH) and testing
  • €63 taxes (VAT, etc.)
  • €11.75 personnel costs, office space, IT, travel
  • €11 legal, accounting
  • €6 events
  • €5.25 webshop hosting
  • €18.25 warranty costs
  • €11 interventions (sustainability, being fair to people and environment)

For the full details, see here. If you are interested in how a phone is built from scratch, the website is a treasure trove of information. Plan to spend some time there…

cURL for Throughput Testing

I was recently faced with the dauntingly tedious task of doing throughput testing, which meant uploading and downloading files from HTTP and FTP servers and noting the average throughput in each direction, separately and simultaneously. This is fun for about 30 minutes if done by hand, but it gets very tedious and even confusing after that, as constantly triggering up- and downloads makes you lose your thread at some point when your mind wanders somewhere else during the downloads. So I decided to automate the process.

There must be about a zillion ways to do this and I chose to do it with cURL, a handy command line tool to upload and download files using just about any protocol used on the net, including HTTP, FTP, POP and many more. It's ultra-configurable via the command line and has a great variety of output options that make later analysis, such as averaging download speeds of different files, very simple.

For doing repetitive downloads I came up with the following bash script (works well under Ubuntu and MacOS):

#!/bin/bash
URL="http://ftp.xyz.com/name-of-file"
OUTFILE=test-down.csv
rm -f $OUTFILE
curl $URL -o /dev/null -w '%{size_download}, %{speed_download}\n' >>$OUTFILE
curl $URL -o /dev/null -w '%{size_download}, %{speed_download}\n' >>$OUTFILE
curl $URL -o /dev/null -w '%{size_download}, %{speed_download}\n' >>$OUTFILE

cat $OUTFILE

The URL variable holds the URL of the file to be downloaded. Obviously, if you test high speed links, the server should have enough bandwidth available on its side for the purpose. The OUTFILE variable holds the name of the local file the file size and download speed are written to. Then the same curl instruction is run three times and each time the result is appended to OUTFILE. While the script runs, each curl invocation outputs information about current speeds, the percentage of the download completed, etc.

And here's my script for automated uploading:

#!/bin/bash
UPURL="http://xyz.com/test/upload.html"
LOCALFILE="10MB.zip"
OUTFILE="test-upload.csv"
rm -f $OUTFILE
curl --data-binary @$LOCALFILE $UPURL -o /dev/null -w '%{size_upload}, %{speed_upload}\n' >> $OUTFILE
curl --data-binary @$LOCALFILE $UPURL -o /dev/null -w '%{size_upload}, %{speed_upload}\n' >> $OUTFILE
cat $OUTFILE

The trick with this one is to find or build a web server as a sink for file uploads. The LOCALFILE variable holds the path and filename of the file to be uploaded, and OUTFILE contains the filename of the text file for the results. Note the --data-binary option, which makes curl send the file unmodified; with a plain -d, curl would strip carriage returns and newlines from the file and skew the measurement.
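
If you don't have a suitable upload sink at hand, a few lines of Python will do. The following is just a minimal sketch under my own assumptions (port 8080, Python 3), not a finished server: it accepts HTTP POST requests, reads and discards the data and returns 200 OK so that curl can complete the transfer and report the upload speed:

from http.server import BaseHTTPRequestHandler, HTTPServer

class UploadSink(BaseHTTPRequestHandler):
    def do_POST(self):
        # read and throw away the uploaded data in chunks
        remaining = int(self.headers.get('Content-Length', 0))
        while remaining > 0:
            remaining -= len(self.rfile.read(min(remaining, 64 * 1024)))
        # confirm the upload so curl finishes cleanly
        self.send_response(200)
        self.send_header('Content-Length', '0')
        self.end_headers()

HTTPServer(('', 8080), UploadSink).serve_forever()

Point the UPURL variable of the upload script at the machine running the sink (e.g. http://myserver:8080/) and, as with the download test, make sure that side has enough bandwidth.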

Note the '.csv' file extension of the OUTFILEs, which makes it convenient to import the results into a spreadsheet for further analysis.


Raising the Shields – Part 9: Open Flanks And Security Agencies Acting Like an Autoimmune Disease

It's been a while since part 8 of this series on how I've improved the protection of my privacy in the face of massive violations of my human rights to freedom and privacy by a number of security organizations around the world, as revealed by Edward Snowden. I've said goodbye to public instant messaging providers and have installed my own server for family-internal communication together with secure end-to-end encryption. Certificate Patrol in the browser protects me from rogue SSL certificates, I've installed GnuPG for email encryption but found it unusable in practice, I've become a regular user of TOR, my browser automatically deletes cookies when I exit it and, most importantly, Owncloud keeps my files, calendar and address book in my own domain. For details on all these things, click on the "Privacy" link at the end of this post to see the previous parts of this series. Despite all of this, however, I still feel there are a number of open flanks that need to be addressed:

  • eMail: As a means of communication, email is completely broken and even encrypting the content will not make it secure. This is because there always needs to be a server somewhere on the Internet to store and forward messages, and even if the content is encrypted, the subject, sender and receiver are not. So apart from encryption, the only thing that could at least make communication between my family members secure and private is to host my own email server at home and have all devices receive and send email via that server. This way, at least the email and content we send between each other would be secure, as it would never end up on an external server.
  • My RSS aggregator leaves trails: Not mentioned above is Selfoss, my self-hosted RSS aggregator that I installed after Google decided to shut down its Reader cloud service. It's been a tremendous enabler, so I'm quite happy Google shut down the only service apart from search that I used from them. One thing I'd really like to do when I have a bit of time is to TORify all aggregator web requests to keep information about which web sites I read private (see the sketch after this list). That might be a bit on the paranoid side, but it's really nobody's business which web sites I'm interested in. Period.
  • Voice and video calling: I still have to find a good replacement for Skype for communication between family members, as a central server farm controlled by Microsoft knows about every call and every message I send over the Skype client. This is probably the most pressing issue I have to address in the near future.
  • Metadata: One thing I can do little about is the metadata my communication creates. Phone companies record who calls me and whom I call, and anyone observing my IP packets knows which websites I'm interested in, which bank I am a customer of, etc.
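
Concerning the RSS aggregator item above: the basic idea is to send the aggregator's feed requests through TOR's local SOCKS proxy. Here's a minimal Python sketch of the concept, not Selfoss's actual code; it assumes TOR is listening on its default local port 9050 and that the requests library is installed with SOCKS support (pip install requests[socks]):

import requests

# TOR's local SOCKS proxy; 'socks5h' routes DNS lookups through
# TOR as well so that name resolution does not leak
TOR_PROXY = {'http': 'socks5h://127.0.0.1:9050',
             'https': 'socks5h://127.0.0.1:9050'}

# fetch a feed through TOR; the URL is just an example
feed = requests.get('http://example.com/feed.xml',
                    proxies=TOR_PROXY, timeout=60)
print(feed.status_code, len(feed.content))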

While I can still close a number of holes in my privacy armor, the metadata issue in particular clearly shows that raising the shields only treats the symptoms; it is definitely not a cure for secret service agencies in many countries trampling on our human rights to freedom and privacy by collecting all data they can get hold of. I recently heard a pretty interesting analogy: Security agencies are like the immune system of the body, which detects threats and protects the body from harm. Without an immune system the body would not survive. But then there are autoimmune diseases, in which the immune system attacks the body itself, which is ultimately fatal. And that's just what is happening right now, and we have to do everything to ensure that security agencies act like a proper immune system and not like an autoimmune disease. In other words, treating the symptoms by raising the shields is not enough; it's just as important to treat the illness itself.