The Myth Of Rising Telecoms Investment

Pretty much whenever telecommunication operators present their quarterly reports these days, there is the usual note about the difficult situation they face due to the investment required to keep their networks up with rising data demand. But is this really the case? The 2012 report of the German telecom regulator (in English) has some interesting numbers on that.

In 2012, investment in fixed and wireless telecommunication networks in Germany was around 6 billion Euros. That's the combined sum of the investments of all market players. Has it risen in recent years? Not according to the report. In reality, investment has remained pretty much stable over several years, and compared to the overall revenue of around 58 billion Euros in 2012 that seems like a quite reasonable number, at least to me (see page 81 of the report).

Let's have a look at some more related numbers: While investment has remained stable, the number of employees in the German telecom sector went down from 184,200 to 176,000, so the pressure can be felt. End customer prices might also have fallen, but I would argue that this was mostly compensated by higher use. This is reflected in a slight revenue decline from around 60 billion Euros in 2009 to 58 billion Euros in 2012. But when looking at the EBITDA of Vodafone Germany, which for 2012 was 3.359 billion Euros out of a revenue of 9.641 billion Euros, a margin of roughly 35%, then I don't really see big suffering.

30 Times More Data In Fixed vs. Wireless Networks And Slowing Data Growth In Wireless

Many telecom regulators in Europe publish a yearly analysis of the state of competition in the telecommunication market. A while ago, the German regulator published its report for 2012 (in English), which contains, among many other interesting numbers, the amount of data transported through fixed and wireless networks in Germany.

As per the report, 4.3 billion gigabytes of data were transported through fixed line networks in Germany in 2012 (page 77), compared to 0.139 billion gigabytes (or 139.75 million GB to sound more impressive) in wireless networks (page 78). In other words, roughly 30 times more data flows to and from fixed line connections than over wireless ones.

According to the report there are 28 million fixed line Internet connections in Germany today, and thus the average monthly amount of data per line is around 12 GB. Also interesting is the rise of fixed line data from 3.7 billion to 4.3 billion gigabytes from 2011 to 2012, a rise of 16%. In wireless networks the amount of data transferred rose from 93 million GB to 139 million GB. That's a rise of around 50%, which is quite substantial but still far from the doubling or tripling seen the year before and the year before that respectively. In other words, the growth has been slowing down for a number of years now.
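
As a quick plausibility check, here's a small Python sketch that recomputes these ratios from the report's figures as quoted above:

# Figures from the regulator's report as quoted above
fixed_2011, fixed_2012 = 3.7e9, 4.3e9    # GB per year in fixed networks
mobile_2011, mobile_2012 = 93e6, 139e6   # GB per year in wireless networks
fixed_lines = 28e6                       # fixed line Internet connections

print(f"fixed vs. wireless: {fixed_2012 / mobile_2012:.0f}x")         # ~31x
print(f"fixed line growth: {fixed_2012 / fixed_2011 - 1:.0%}")        # 16%
print(f"wireless growth: {mobile_2012 / mobile_2011 - 1:.0%}")        # 49%
print(f"GB per line and month: {fixed_2012 / fixed_lines / 12:.1f}")  # 12.8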

The report further says that there were 139 million mobile subscribers in Germany in 2012, out of which around 40 million actively transfer data (page 79). This made me think a bit. I pay around 40 euros a month for my fixed line Internet and telephony connection today and around the same amount for wireless connectivity. And while the fixed line is shared, every family member has an individual mobile contract. So in effect I pay less for my fixed line connection, when broken down per user, than for my wireless subscription, and on top transfer over 30 times more data over it. Or, put the other way round, I pay more for my mobile subscription than for my fixed line and use it far, far less.

All of this would make sense if wireless networks were more expensive to build and maintain than fixed line networks. But is it really cheaper to drag a fiber cable close to people's homes these days and then run a copper wire to each individual house or apartment than to set up a base station on a rooftop that serves a thousand users? I have my doubts.

The Fairphone – How Much Does What Cost?

Which device will be my next smartphone? I've made my choice and it will be the Fairphone. It's in the process of being built by a small company established in the Netherlands, and the aim is to produce it with people and the environment in mind: no child labor in African mines, fair wages for Chinese workers and safe working conditions. In addition, the company is open about the whole process of building the device and uses an open operating system, i.e. Android, and perhaps Firefox OS and Ubuntu in the future.

The device is in production now, with shipment foreseen around Christmas time. One interesting piece of information I recently came across when I wanted to get an update on their status is the cost breakdown of the device's retail price of €325, based on a production run of 25,000 devices. Here are some noteworthy numbers (a quick tally follows the list):

  • €129 design, engineering, components, manufacturing
  • €4.75 prototyping
  • €4.25 reseller margin
  • €9 certifications (CE, GCF, RoHS, FCC, Reach) and testing
  • €63 taxes (VAT, etc.)
  • €11.75 personnel costs, office space, IT, travel
  • €11 legal, accounting
  • €6 events
  • €5.25 webshop hosting
  • €18.25 warranty costs
  • €11 interventions (sustainability, being fair to people and environment)
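
Since these are only some of the items, they don't add up to the full retail price. A small Python sketch makes the gap visible; the remainder is covered by items not listed here:

# The cost items listed above (in euros)
items = {
    "design, engineering, components, manufacturing": 129,
    "prototyping": 4.75,
    "reseller margin": 4.25,
    "certifications and testing": 9,
    "taxes": 63,
    "personnel, office, IT, travel": 11.75,
    "legal, accounting": 11,
    "events": 6,
    "webshop hosting": 5.25,
    "warranty costs": 18.25,
    "interventions": 11,
}

listed = sum(items.values())
print(f"listed items: {listed} euros")        # 273.25
print(f"not itemized: {325 - listed} euros")  # 51.75 for items not in this list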

For the full details, see here. If you are interested in how a phone is built from scratch, the website is a treasure trove of information. Set aside some time…

cURL for Throughput Testing

I was recently faced with the dauntingly tedious task of doing throughput testing, which meant uploading and downloading files from HTTP and FTP servers and noting the average throughputs in each direction, separately and simultaneously. This is fun for about 30 minutes if done by hand but gets very tedious and even confusing after that, as constantly triggering up- and downloads makes you lose your thread at some point when your mind wanders off during the downloads. So I decided to automate the process.

There must be about a zillion ways to do this, and I chose to do it with cURL, a handy command line tool to upload and download files over just about any protocol used on the net, including HTTP, FTP, POP and so on. It's ultra configurable via the command line and has a great variety of output options that make later analysis, such as averaging the download speeds of different files, very simple.

For doing repetitive downloads I came up with the following bash script (works well under Ubuntu and macOS):

#!/bin/bash
# URL of the file to download and name of the local results file
URL="http://ftp.xyz.com/name-of-file"
OUTFILE=test-down.csv
rm -f "$OUTFILE"
# Run the same download three times, appending file size and speed to the results
for i in 1 2 3; do
  curl "$URL" -o /dev/null -w '%{size_download}, %{speed_download}\n' >> "$OUTFILE"
done

cat "$OUTFILE"

The URL variable holds the URL of the file to be downloaded. Obviously, if you test high speed links, the server should have enough bandwidth available on its side for the purpose. The OUTFILE variable holds the name of the local file into which the file size and download speeds are written. Then the same curl instruction is run three times and each time the result is appended to OUTFILE. While the script runs, each curl invocation outputs information about current speeds, the percentage of the download completed, and so on.

And here's my script for automated uploading:

#!/bin/bash
# Upload target URL, local file to upload and name of the local results file
UPURL="http://xyz.com/test/upload.html"
LOCALFILE="10MB.zip"
OUTFILE="test-upload.csv"
rm -f "$OUTFILE"
# POST the local file twice, appending upload size and speed to the results file
curl -d @"$LOCALFILE" "$UPURL" -o /dev/null -w '%{size_upload}, %{speed_upload}\n' >> "$OUTFILE"
curl -d @"$LOCALFILE" "$UPURL" -o /dev/null -w '%{size_upload}, %{speed_upload}\n' >> "$OUTFILE"
cat "$OUTFILE"

The trick with this one is to find or build a web server that acts as a sink for file uploads. The LOCALFILE variable holds the path and filename of the file to be uploaded, and OUTFILE contains the filename of the text file for the results.
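
If no such sink is at hand, a few lines of Python are enough to build a minimal one. The following is just a sketch, assuming Python 3 on the server and port 8080 being free; it simply reads and discards whatever is posted to it:

#!/usr/bin/env python3
import http.server

class UploadSink(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and throw away the posted body so curl can measure the upload speed
        length = int(self.headers.get('Content-Length', 0))
        self.rfile.read(length)
        self.send_response(200)
        self.end_headers()

# Listen on all interfaces on port 8080 (an arbitrary choice)
http.server.HTTPServer(('', 8080), UploadSink).serve_forever()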

Note the '.csv' file extension of the output files, which makes it convenient to import the results into a spreadsheet for further analysis.
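
And if a spreadsheet is more than you need, the averaging can be done with a few lines of Python as well. This sketch assumes the result file produced by the download script above; curl reports speeds in bytes per second, so the average is converted to Mbit/s:

#!/usr/bin/env python3
import csv

with open('test-down.csv') as f:
    # The second column of each row is the download speed in bytes/s
    speeds = [float(row[1]) for row in csv.reader(f)]

print(f"average: {sum(speeds) / len(speeds) * 8 / 1e6:.2f} Mbit/s")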


Raising the Shields – Part 9: Open Flanks And Security Agencies Acting Like an Autoimmune Disease

It's been a while since part 8 of this series on how I've improved the protection of my privacy in the face of the massive violations of my freedom and privacy by a number of security organizations around the world, as revealed by Edward Snowden. I've said goodbye to public instant messaging providers and have installed my own server for family internal communication together with secure end-to-end encryption. Certificate Patrol in the browser protects me from rogue SSL certificates, I've installed GnuPG for email encryption but found it unusable in practice, I've become a regular user of TOR, my browser automatically deletes cookies when I exit it and, most importantly, Owncloud keeps my files, calendar and address book in my own domain. For details on all those things, click on the "Privacy" link at the end of this post to see the previous parts of this series. Despite all of this, however, I still feel there are a number of open flanks that need to be addressed:

  • eMail: As a means of communication, email is completely broken, and even encrypting the content will not make this form of communication secure. This is because there always needs to be a server somewhere on the Internet to store and forward messages, and even if the content is encrypted, the subject, sender and receiver are not. So apart from encryption, the only thing that could at least make communication between my family members secure and private is to host my own email server at home and have all devices receive and send email via that server. This way, at least the email we send between each other would stay private, as it would never end up on an external server.
  • My RSS aggregator leaves trails: Not mentioned above is Selfoss, my self hosted RSS aggregator that I installed after Google decided to shut down its Reader cloud service. It's been a tremendous enabler, so I'm quite happy Google shut down the only service apart from search that I used from them. One thing I'd really like to do when I have a bit of time is to TORify all aggregator web requests to keep information about which web sites I read private (see the sketch after this list). That might be a bit on the paranoid side, but it's really nobody's business which web sites I'm interested in. Period.
  • Voice and video calling: I still have to find a good replacement for Skype for communication between family members, as a central server farm controlled by Microsoft knows about every call and every message I send over the Skype client. This is probably the most pressing issue that I have to address in the near future.
  • Metadata: One thing I can do little about is the metadata my communication creates. Phone companies record who calls me and whom I call, anyone observing my IP packets knows what websites I'm interested in, which bank I am a customer of, etc. etc. 
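
To illustrate the TORified aggregator idea from above: Selfoss itself is written in PHP, but the principle is easy to show in a few lines of Python. This is only a sketch; it assumes a local Tor daemon with its default SOCKS port 9050, the requests library with SOCKS support installed (pip install requests[socks]), and a placeholder feed URL:

import requests

# Route the request through the local Tor SOCKS proxy; the 'socks5h' scheme
# makes sure that DNS resolution also happens inside the Tor network
proxies = {'http': 'socks5h://127.0.0.1:9050',
           'https': 'socks5h://127.0.0.1:9050'}

# example.com/feed.xml stands in for a real feed URL
r = requests.get('http://example.com/feed.xml', proxies=proxies, timeout=60)
print(r.status_code, len(r.content))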

While I can still close a number of holes in my privacy armor, especially the metadata issue clearly shows that raising the shields only treats the symptoms; it is definitely not a cure for secret service agencies in many countries trampling on our human rights of freedom and privacy by collecting all data they can get hold of. I recently heard a pretty interesting analogy: Security agencies are like the immune system of the body, which detects and protects us from harm that attacks the body. Without an immune system the body would not survive. But then there are autoimmune diseases, where the immune system attacks the body itself, which is ultimately fatal. And that's what is happening right now, and we have to do everything to ensure that security agencies act as a proper immune system and not like an autoimmune disease. In other words, treating the symptoms by raising the shields is not enough; it's just as important to treat the illness itself.

Where Did The 4 GSM Dots Come From?

When GSM was first launched in Europe a long, long time ago back in 1992, the GSM logo could quite often be seen in advertisements. It's gone a bit out of fashion in the past decade, but every now and then I still stumble over it, like in the image on the left that I took in Bratislava. I've done a lot of research into the history of GSM, but I could never find any information on the logo. Who designed it, and what do the 4 dots you can see in the "M" of GSM signify? Who put them there? It's all a big mystery. If you know something about this, I'd be glad if you could share it in the comments below.

Bluetooth Revival Part 2: A Bluetooth Loudspeaker

Back in August I reported that I had suddenly found a use for Bluetooth connectivity again after not having used it for quite some time, thanks to my new notebook including a Bluetooth radio. Since then I've added yet another one. As I travel a lot, I like to have a good loudspeaker connected to my PC in hotel rooms, either for listening to music or for watching a movie streamed from the PC to the TV set in the room. When I looked for a small Bluetooth enabled loudspeaker two years ago I didn't find any, at least not in the size I wanted, so I opted for a cable-based loudspeaker. So I was quite surprised that two years later it's quite easy to find small Bluetooth enabled loudspeakers produced by many companies. Quite an interesting change. So I bought one and it works just fine with my Ubuntu powered notebook and with my Android smartphone as well. Great!

Two Updates Fail Massively Within The Hour – The Internet Came To The Rescue!

When updating the software of my devices I am usually a bit cautious and wait a couple of days after patches and updates are announced, just in case it is discovered that the update breaks something and needs to be fixed. While this tactic usually works, it somewhat failed me recently when two updates failed massively within the hour. Luckily, the Internet came to the rescue.

The first major fail occurred when updating the Ubuntu distro on my media PC from 13.04 to 13.10. While everything looked fine during the update, the system would not let me log on after rebooting. All I saw was a mysterious error message after typing in my username and password before I was thrown back to the login prompt. WOT!? One can be skeptical about Google, but they do find things quickly and thus saved the day. Only a few hours earlier, somebody had posted a fix for the issue: Switch to the console and uninstall Cinnamon and the Nemo file manager, which I had installed previously because Ubuntu has castrated the Nautilus file manager to the point of being basically useless. Could this really be the issue? I was skeptical at first, but it worked and I could log in again. So much for installing software from third party repositories… Fortunately, the Nemo file manager has made it into the official Ubuntu repository in the meantime and I could re-install it from there without the issue re-appearing.

Once that was fixed, I updated Thunderbird on my production notebook to apply the latest security patches. That can't possibly go wrong, now can it? It seems it can, as I stumbled right into the next issue. After restarting Thunderbird, the Lightning calendar just showed an empty window. WOT!? Two minutes and another Internet search later, I found out that for an as yet unknown reason, updating Thunderbird broke the plugin. The solution: Downgrade Thunderbird until Lightning is updated as well. Fortunately, I do not use the pre-packaged Thunderbird, so I can apply security patches quicker than with the default Ubuntu install. I guess that saved me this time, as I just had to rename the directory and download the previous version again.

So was I too quick with the updates? Perhaps, but from a different point of view my somewhat cautious update behavior has saved me nevertheless. If I had updated both systems earlier, I would probably not have found a fix for either issue on the net. And while I can live with a broken media PC for a while, a broken Thunderbird on my production notebook would have been totally unacceptable. So perhaps I waited just long enough.

Another takeaway is that without people out there sharing information via blogs, message boards and other means, things would be a lot more difficult and some even impossible. And that's not an overstatement; I still remember how desperate I sometimes was in the days when I 'only' had books, e.g. to learn programming, and getting stuck often meant hours of searching for an answer in seemingly endless trial and error loops.

The Difference Between The Early Years of UMTS And LTE

Last month I was musing about how 2G we still were only 10 years ago, which makes it even more amazing what has happened in the mobile domain since. When looking at UMTS and LTE and how they were used in their early years, I see a striking difference. When UMTS first launched, it took quite a while for devices to actually become available and even longer before there was general take-up. For quite a number of years, UMTS base stations were just sitting there producing heat but were actually little used. There were probably a number of reasons for this. One was certainly that mobile Internet access was a novelty and only used by few. In addition, content readable on small screens was even scarcer. And on top of that, screens were tiny by today's standards and devices were bulky. Not a good mixture for fast take-up.

With LTE the story is quite different. There was perhaps a time span of one year after the launch of the first networks during which they were mostly used with LTE capable USB data sticks. The situation changed quite quickly, however, with devices such as the Samsung Galaxy S-III and the iPhone becoming LTE capable. Yes, there were certainly other LTE smartphones available before those two, but they still seemed to have something of an experimental character to me and probably weren't sold in high volumes. And LTE had the invaluable advantage that by the time the networks were launched, mobile Internet access had become mainstream, devices thin and screens large enough for enjoyable media consumption.

And here's my personal LTE timeline:

  • 2009: Lots of talk about it, but nothing commercially deployed
  • 2010: Experimental network deployments
  • 2011: Mass rollout of networks
  • 2012: First LTE capable smartphones that could actually be bought became available
  • 2013: Real LTE usage with the iPhone 5 and Samsung Galaxy S-III

The DIY-CPU Project – Experimenting With RAM Using a Raspberry Pi

Now that I know a couple of ways to build a clock generator, I've moved on to the next step and have started looking into what kind of RAM I want to use for my Do It Yourself CPU (DIY-CPU). Here's the story of how that went:

As it's an experimental system, I only need a couple of bytes of RAM to hold a short program and a few bytes of data. I don't want to build the RAM myself, however, as I feel confident that I understand how static RAM is composed of flip-flops, how flip-flops are built from gates and why they have two stable states. Therefore I had a look around for a small off-the-shelf RAM chip that I could use instead.

The smallest static RAM chip I could find has a size of 8 kilobytes, which sounds like very little by today's standards, of course. However, that is still far more than I need; I would have been content with a 256 byte version. Anyway, I bought an 8 kB CMOS RAM chip, and for those who'd like to take a closer look, you can find the datasheet of the WS6264 here.

To get a feeling for how I can write data into the chip and get it back out again, I used a breadboard to experiment a bit. To make life a bit easier I decided not to use physical switches and LEDs to control and read data from the address and data buses. Instead I decided to use a Raspberry Pi with a Pi-Face extension board that offers 8 digital inputs and 8 digital outputs, and the flexibility of Python programs to read and write data on the various pins of the RAM chip. As 8 inputs and 8 outputs are obviously not enough to control all of the 13 address bus pins, the 8 data bus pins and the various enable and set lines, I had to simplify a bit and only use 3 bits of the address bus and 3 bits of the data bus while grounding all other lines. In other words, I limited myself to reading and writing 2^3 = 8 bytes. More than enough to get a feeling for how to read and write data into the chip. In addition to the bus pins, I also control the Output Enable line and the Write Enable line of the chip with two output ports of the Raspberry Pi.

The first picture on the left shows this setup. The green cables between the breadboard and the Raspi go from the RAM's first three data bus pins to three input ports of the Raspi so I can monitor the data bus. The fourth green cable is the common ground. The red cables between the Raspi and the RAM connect the first three address bus pins, which I can control with 3 output port pins of the Pi. The orange cable connects to the Write Enable line and the yellow cable connects to the Output Enable line.

To write something to a memory location, I used a little Python program to put the address over the red cables onto the address bus. The red button that can be seen at the bottom of the image is used to set one of the 8 bits of the byte to either 0 or 1 while all other bits of the data bus are pulled to ground. This way I can cycle through the 8 bytes I can address and set one bit of each byte to either 0 or 1 while all other bits are always 0. Remember, it's only to figure out how things work, so there's no need to set all 8 bits of the byte. Just seeing that the bits come out correctly again later on is enough.

Once the address is on the address bus and I have either pressed the button or not to write a 1 or 0, respectively, to one of the bits, I activate the Write Enable line for a short time (by pulling it to ground) and release it again to commit the value to the chip. With a loop in the Python program I then go to the subsequent memory addresses and repeat the exercise seven more times. In a second loop that follows, I cycle through all addresses again, but this time I use the Output Enable line to put the stored data on the data bus, read its values via the Pi-Face's input ports and display the result in a console window and also in the graphical simulation GUI, as shown in the second figure on the left.
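
In condensed form, the two loops look roughly like the following Python sketch. The pin assignments, the helper functions and the active-low polarity of the enable lines are my own illustration of the setup described above and depend on the actual wiring; the data bit itself is set with the push button, so the program only drives the address and enable lines:

#!/usr/bin/env python3
import time
import pifacedigitalio as pfio

pfio.init()

ADDR_PINS = [0, 1, 2]   # output ports driving address lines A0-A2 (assumed wiring)
WE_PIN, OE_PIN = 3, 4   # output ports driving Write Enable and Output Enable
DATA_PINS = [0, 1, 2]   # input ports monitoring data lines D0-D2 (assumed wiring)

def set_address(addr):
    # Put a 3 bit address on the address bus lines
    for bit, pin in enumerate(ADDR_PINS):
        pfio.digital_write(pin, (addr >> bit) & 1)

def write_cycle(addr):
    set_address(addr)
    # Pulse Write Enable (active low) to commit the value on the data bus
    pfio.digital_write(WE_PIN, 0)
    time.sleep(0.01)
    pfio.digital_write(WE_PIN, 1)

def read_cycle(addr):
    set_address(addr)
    # Activate Output Enable (active low) so the chip drives the data bus
    pfio.digital_write(OE_PIN, 0)
    time.sleep(0.01)
    value = sum(pfio.digital_read(pin) << bit for bit, pin in enumerate(DATA_PINS))
    pfio.digital_write(OE_PIN, 1)
    return value

# First loop: write to all 8 addressable bytes; second loop: read them back
for addr in range(8):
    write_cycle(addr)
for addr in range(8):
    print(addr, read_cycle(addr))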

It took a couple of hours to get everything working, but in the end I managed to figure out how to use the control lines to write and read my bits. Every day I read and write billions of bytes to RAM chips by working with computers, smartphones and other devices. But that's something that happens in the background without me consciously doing it. With this experiment I have, for the first time in my life, physically written and read bytes to and from memory by hand. Quite an interesting thought 🙂

Also, I figured out how to pull the data bus lines to ground via 33 kOhm resistors so I can use the lines for both input and output. This will be required in the next step, when I hook up a number of other components to the bus, such as the registers that will be part of the CPU, so I can transfer CPU instructions and data between them and the RAM chip. The order for the additional chips has already gone out and I will soon report how that went.