Sunspider Smartphone to Notebook Speed Comparison – 2014 Update

At the end of 2012 I had a post in which I described the results of a speed comparison between notebooks and smartphones. One and a half years later I decided it was time for another look to see if and how the world has moved on.

Again, I've been using the SunSpider test suite that runs in a web browser. Not only has the hardware moved on, but browsers may also have received more optimized JavaScript engines in the meantime, and the SunSpider test suite itself has been updated from version 0.9.8 used in the previous post to version 1.0.2 used in this one. On the notebook side, at least, this makes no difference, as the benchmark on the same notebook driven by an Intel i3-2367M at 1.4 GHz came in with almost exactly the same result (416 vs. 410 ms) as one and a half years ago.

So here are my 2014 results with current hardware:

178 ms, MacBook Pro, Intel i7, 2.4 GHz, Firefox 28, OS X 10.9.2

260 ms, Lenovo E330 (Intel i3-2348M, 2.3 GHz), Ubuntu 12.04, Firefox 28
534 ms, Lenovo E330, virtualized Windows 7 running on the Ubuntu 12.04 host

416 ms, Lenovo E130 (Intel i3-2367M, 1.4 GHz), Ubuntu 12.04, Firefox 28
—> 410 ms in the direct comparison with SunSpider 0.9.8 and Firefox 16.0.2 in the previous test

411 ms, iPhone 5s (€700+), ARM64, native browser, result taken from here.

(1266 ms, netbook from 2009, Intel Atom N270 (first generation), 1.6 GHz, Firefox 16.0.2, Ubuntu 12.04)

1376 ms, mid-range Android 4.2.2 based smartphone (€250), Opera Mobile browser

1928 ms, low-end Android 4.3 based device (€130)

The direct comparison shows that both the notebook and the smartphone worlds have moved on significantly. The iPhone 5s has twice the single-core CPU power of its predecessor, and my current i3-based notebook is twice as fast as the notebook I used one and a half years ago. The mid-range Android phone now has the CPU power a flagship Android smartphone had one and a half years ago. Note that I didn't measure the 2009 Intel Atom based netbook again (hence the line is in brackets) but included it for comparison's sake, to show where fast smartphones sold today stand compared to netbooks of the 2009 timeframe. Quite impressive!

The State of LTE Carrier Aggregation in Practice

LTE networks have been up and running for five years now and we have certainly come a long way in terms of speed, stability and usable devices since 2009. The next step in the race for ever faster speeds is Carrier Aggregation (CA), i.e. the simultaneous use of several LTE carriers in different bands. While the specifications allow for a lot of flexibility, in practice I mainly see the following CA deployments in the field today:

South Korea and the US seem to be the countries with the most pressing need for CA as, for various reasons, they are limited to 10 MHz carriers. Verizon, for example, has thus started deploying carrier aggregation of two 10 MHz carriers, one in the 700 MHz band and one in the 1700/2100 MHz AWS band, for a combined bandwidth of 20 MHz.

In Europe, Germany seems to be the country most interested in Carrier Aggregation. Here, operators already have 20 MHz carriers on air in the 1800 MHz and 2600 MHz bands (bands 3 and 7). In addition, three operators have a 10 MHz carrier in the 800 MHz band (band 20). In other words, they use carrier aggregation to go beyond the 20 MHz they already have. One network operator combines spectrum in the 800 MHz and 2600 MHz bands for a total downlink carrier bandwidth of 30 MHz. Another operator is about to aggregate resources in the 1800 MHz and 2600 MHz bands for a total of 40 MHz, i.e. twice the bandwidth that is aggregated by Verizon in the US.

So far, only a few devices support Carrier Aggregation, but by the end of 2014 I expect it will be quite a handful, so from my point of view this is the state of the art in deployed networks at the moment. Looking a bit into the future, there are a couple of further enhancements in the pipeline. For one, data transmission rates could be increased by using more than two antennas on the base station and mobile device side. 4×4 MIMO has already been trialed, but the difficulty is how to get more than two antennas per sector onto rooftops without unduly increasing the size and weight of the antennas. On the mobile device side there's a similar dilemma, perhaps not so much in weight but in the space available for even more antennas. Time will tell. A bit further down the road is carrier aggregation with three independent component carriers. 3GPP has just recently standardized the new device categories 9 and 10 for this purpose, with a theoretical maximum downlink speed of 450 Mbit/s (20 MHz = 150 Mbit/s, 40 MHz = 300 Mbit/s, 60 MHz = 450 Mbit/s). This whitepaper by Nomor Research contains some interesting details on this.
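To put the arithmetic in one place, here is a minimal Python sketch that computes the theoretical peak downlink rate for a set of aggregated component carriers. It simply assumes the 150 Mbit/s per 20 MHz carrier quoted above (i.e. 2x2 MIMO and 64QAM); the carrier combinations are the deployment examples from this post.

```python
# Theoretical LTE downlink peak rate for aggregated component carriers,
# assuming ~150 Mbit/s per 20 MHz (2x2 MIMO, 64QAM), i.e. 7.5 Mbit/s per MHz.
# Real-world throughput is of course lower.
MBIT_PER_MHZ = 150 / 20

def peak_downlink_mbit(carriers_mhz):
    """carriers_mhz: list of component carrier bandwidths in MHz."""
    return sum(carriers_mhz) * MBIT_PER_MHZ

print(peak_downlink_mbit([10, 10]))      # Verizon, 700 MHz + AWS      -> 150.0
print(peak_downlink_mbit([10, 20]))      # 800 MHz + 2600 MHz          -> 225.0
print(peak_downlink_mbit([20, 20]))      # 1800 MHz + 2600 MHz         -> 300.0
print(peak_downlink_mbit([20, 20, 20]))  # 3 carriers, Cat 9/10 peak   -> 450.0
```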

The Selfoss Advantage – Full Articles Instead of RSS Stumps

And here's another post on Selfoss, my RSS server at home that aggregates the news and posts from the websites I'm interested in. What I only realized when going through the code is that, for a number of predefined news websites, the server not only grabs the content of the RSS feed but actually downloads the text of the full article linked in the RSS feed and inserts it into my reading list. That's particularly useful when I read my news stream on a mobile device, as in this scenario in particular I'm not fond of having to open new tabs and wait for a full desktop-size web page to download. A really nice feature!

Unfortunately this functionality requires dedicated code for each website, as all the 'custom clutter' around an article's text needs to be removed. But again, open source shines in this regard. As there are a few websites on my reading list that only offer text stubs in their RSS feed and overblown web pages that take too long to download, I decided to do something about it. So I recently spent some time extending Selfoss a bit to fetch the full text content for those feeds. Once done and tested, I learned how to use Git and GitHub and offered my code via a 'pull request' to the Selfoss project. And indeed, my code will be part of version 2.11, to be published soon.
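Selfoss itself is written in PHP and my actual changes live in its per-site content loaders, but to illustrate the general idea, here is a rough Python sketch of per-site full-text extraction. The domain and the regular expression are made-up placeholders, not actual Selfoss code.

```python
# Sketch of per-site full-text extraction: for known sites, fetch the article
# page linked in the RSS item and cut out the article body; for unknown sites,
# fall back to the stub from the feed. Domain and pattern are invented examples.
import re
from urllib.parse import urlparse
from urllib.request import urlopen

SITE_RULES = {
    "example-news.com": re.compile(r'<div class="article-body">(.*?)</div>', re.S),
}

def full_text(item_url, feed_summary):
    """Return the full article text if we have a rule for the site, else the RSS stub."""
    rule = SITE_RULES.get(urlparse(item_url).netloc)
    if rule is None:
        return feed_summary                                  # unknown site: keep the stub
    html = urlopen(item_url, timeout=10).read().decode("utf-8", "replace")
    match = rule.search(html)
    if match is None:
        return feed_summary                                  # page layout changed: fall back
    return re.sub(r"<[^>]+>", " ", match.group(1)).strip()   # strip remaining HTML tags
```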

I really like open source 🙂

I Can’t Reach My Server – The Internet Must Be Down

There is a running gag between me and the fellow engineers I work with: if the website of one of the most popular tech magazines in Germany can't be reached, the whole Internet must be down. This is because whenever one of us wants to check if there is Internet connectivity, we type in the URL of this website to see if we can reach it, as it is (almost) always reachable.

So far so good. Recently, I was reminded of this running gag when I was in Taiwan and wanted to reach one of my servers at home and got no response, neither via the DSL line they are connected to nor via the LTE backup link. A quick interrogation of my GSM power socket via SMS revealed that there was no power outage either. So what could be the reason for this!?

As a next step I performed a traceroute and noticed that everything was working up to the edge of my provider's network in Germany. After that, however, responses stopped coming in. So for about half an hour, the fixed line and wireless network of one of Germany's largest network operators was indeed not reachable from the outside. Few probably noticed, as it was 3 am local time. As I was in Taipei, however, it was 9 am for me and I did notice.

I wonder what will happen next time I travel!? I've had a DSL outage before while traveling, a city-wide power outage that interrupted communication last December while I was on vacation, a power outage caused by construction work during another vacation, and now a backbone router outage on yet another trip. And whenever I think I can't imagine anything else, reality shows me another possibility.

Welcome to the connected world!

Shell Windows, DOS Boxes and Real Teletypes

If you are old enough, you might remember that 'shell' windows and 'DOS boxes' on graphical user interfaces are a virtualization of physical terminals that were connected over a serial interface to a mainframe computer. Sounds pretty ancient today, doesn't it!? But actually that wasn't the beginning, as those hardware terminals with keyboards and screens were themselves a virtualization of the original input/output device connected to a computer over a serial interface: a mechanical teletype machine with a paper tape reader.

When I recently did some history research I came across this video on YouTube that shows how an original teletype machine was used, before the days of hardware terminals with screens, to load programs from paper tape into an Altair 8800, widely considered the first personal computer. Once the program is loaded, the teletype is then used to type in instructions and programs and to see the results printed out on paper. One thing is for sure: after watching this video you'll never quite look at a shell window and a blinking cursor the same way as before.

Wikipedia LTE Band Sorter

A quick entry today about a feature I've just found that is incredibly helpful. As those of you involved in LTE know, the LTE band numbers were assigned in order of request and not in order of the frequency range covered by a band. In other words, tables sorted by band number jump wildly through the frequencies. Quite a number of times I have wished for a list sorted by frequency rather than by band number, e.g. to see overlaps or proximity. It turns out that Wikipedia has just that option. Have a look at the LTE band table in the E-UTRA entry: in the header section there are small up and down arrows for each column so the table can be sorted on any column. How neat!
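And if you'd rather re-sort offline, the same effect takes only a few lines of Python. The list below contains just four of the bands mentioned in these posts with their downlink start frequencies; it is only meant to show the re-sorting, not to replace the Wikipedia table.

```python
# A few LTE bands with their downlink start frequencies in MHz.
bands = {1: 2110, 3: 1805, 7: 2620, 20: 791}

# Sorted by band number the frequencies jump around ...
print(sorted(bands.items()))                      # [(1, 2110), (3, 1805), (7, 2620), (20, 791)]

# ... sorted by frequency, neighbours and proximity become obvious.
print(sorted(bands.items(), key=lambda b: b[1]))  # [(20, 791), (3, 1805), (1, 2110), (7, 2620)]
```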

Race To Sleep

I'm not actually sure who coined the term 'Race to Sleep' but I seem to hear it more often these days.

The idea behind it is to speed up an operation so the device can enter a very low power sleep state more quickly afterwards, at the expense of a higher peak power requirement during the operation itself. When 'Race to Sleep' works, the overall energy required for the faster execution plus the longer sleep time (as a reward) is lower than with a previous architecture in which the operation took longer, drew less peak power, but left a shorter sleep time. The 'operation' can be just about anything: raw computing power, more complexity to speed up data transmission, GPU power, etc.
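A quick back-of-the-envelope calculation shows when the trade-off pays off. All numbers below are made up purely for illustration; only the principle (energy = power × time over a fixed window) matters.

```python
def energy_joules(active_w, active_s, sleep_w, window_s):
    """Energy over a fixed window: active phase plus sleep for the remainder."""
    return active_w * active_s + sleep_w * (window_s - active_s)

SLEEP_W  = 0.02   # deep-sleep power draw in watts (invented value)
WINDOW_S = 10.0   # fixed observation window in seconds

slow = energy_joules(active_w=1.0, active_s=4.0, sleep_w=SLEEP_W, window_s=WINDOW_S)
fast = energy_joules(active_w=1.6, active_s=2.0, sleep_w=SLEEP_W, window_s=WINDOW_S)

print(f"slower chip: {slow:.2f} J, faster chip: {fast:.2f} J")
# Despite the 60% higher peak power the faster chip wins here (3.36 J vs.
# 4.12 J) because it finishes in half the time and sleeps for the rest
# of the window.
```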

Does this really work in practice or is it just a myth? It seems it can work and AnandTech wrote a very detailed post on this phenomenon comparing power consumption for the same operations between a mobile device and its successor version. Have a look here for the details.

But at the end of the post he also notes that in practice, the gain from, for example, downloading and rendering a web page faster at higher power and then making up for it by staying in a sleep state longer than before may quickly be eaten up by users browsing more: because pages load faster, they can start scrolling earlier.

So perhaps 'Race to Sleep' is most effective when a task that is sped up does not result in extra power expenditure later on due to the user being able to interact with a device even more quickly than before.

Change In the Past 5 Years – PC vs. Mobile

When I look back 5 years I notice that the speed of change in the PC sector is quite different from what happened in mobile. Going back to the 2008/2009 timeframe, Windows Vista was the dominating (but not very much loved) operating system, and 2009 saw the launch of the not-so-different (but much more loved) Windows 7 that still dominates the PC today. Also, I still use the notebook I bought back in 2008 for what it was intended for at the time: as a desktop PC replacement. It has a dual-core Intel Centrino CPU, 4 GB of RAM and a 256 GB hard disk. Performance-wise it plays DVDs and streams video content just as well as my latest and greatest notebook does. From a user input response point of view, it doesn't feel any different in speed from the machine I mostly use today. That switch, however, was not made because the machine had become inadequate performance-wise but because it was bought as a desktop replacement without the mobility in mind that I need today.

It's not that there haven't been advances in this sector in the past 5 years, but they pale in comparison to what happened in mobile. Back in the 2008/09 timeframe, Symbian and Windows Mobile were the dominating mobile operating systems. While Windows 7 is still alive and kicking on the desktop, those two mobile operating systems are pretty much extinct by now, having been replaced by operating systems such as the Linux-based Android OS that launched in 2008. When you think about how Android looked then and what its capabilities were and compare it to today, the difference is truly remarkable. If you don't remember what the first Android device looked like, have a look at the picture that is part of the Wikipedia article on the HTC Dream, the first Android device. From a hardware point of view, change has also been remarkable. The first Android device launched with 192 MB of RAM compared to the 1 or 2 GB of memory high-end devices feature today. Mobile processors have evolved from a 500 MHz single-core architecture to 1 to 2 GHz dual- or quad-core architectures with much improved processor design. Mobile GPU capabilities have risen even more dramatically, and the original 320×480 screen resolution is at best only found in very low-end devices today.

The point I want to make with this comparison: there has surely been a lot of innovation in the PC and notebook sector, but devices bought 5 years ago are still in service today and work well on a 5-year-old operating system version that still dominates the market. In the mobile space the pace was much quicker, and smartphones bought 5 years ago are nowhere to be seen anymore, as the capabilities of current devices have improved so much that people were willing to upgrade to a new device at least once or twice during that timeframe.

This makes me wonder if we'll see the same innovation speed in mobile in the next 5 years or whether it will slow to a rate similar to what can be seen in the desktop/notebook market. And if that is the case, will there be a "next big thing" during that timeframe?

Some Thoughts on Paid Peering, Who Pays Whom and Why

In a previous post I've given an introduction to the different kinds of interconnection between the networks that form the Internet: Transit, Peering and Paid Peering. In this post I'd like to put down my notes on Paid Peering and who pays whom for what:

Paid Peering is used, for example, between access networks and content delivery networks or the content companies themselves, with the content side paying the access network for the privilege of connecting directly. From what I can tell, content providers used to pay content distribution networks such as Akamai to store their content closer to the subscribers and deliver it from there. In turn, Akamai paid the access networks for peering. At some point, some content providers started to build their own distribution networks and hence wanted to peer directly with access networks. In some cases they got this peering for free, especially from smaller access network providers, because those could not risk not offering the content to their subscribers. Also, free peering with the content provider was, and probably still is, cheaper for them than getting this data over a Transit link they have to pay for.

The balance of power is different, though, when a larger access network operator comes into play, as they argue that the content provider should pay for the peering, just as it was done before when a content distribution network sat between them and the content. The prime reason given is that they have to invest in their own network to transport the rising amount of video content and hence should be reimbursed by the content companies. The interesting part is the discrepancy with the small access network operators, which seem to do just fine without this cross-financing. In other words, paid peering between access network operator and content company is an interesting way to create monopolies that can be exploited when it comes to content-heavy applications.

Because of this it is easy to confuse paid peering with network neutrality, as is frequently done in the press. Net neutrality requires all packets to be forwarded with equal priority, while paid peering regulates who pays whom for a connection. In other words, an access network operator can be as network neutral as it wants and still get money from the content provider via paid peering.

For those who want to follow this train of thought I can recommend Dean Bubley's recent blog post on why 'AT&T's shrill anti-neutrality stance is dangerous'.

Were My Raspberry Servers Heartbleed Vulnerable?

Last week, I patched my Raspberry Pi based web servers in a hurry to make sure they were no longer vulnerable to a Heartbleed attack. I decided to do this quickly as a check of the OpenSSL library on my servers showed that a vulnerable version was installed. What I couldn't check at the time was whether my web servers actually used the library for SSL encryption. I only later discovered that there were tools available to do just that, but by then my servers were already patched. So after returning home from a business trip I decided that I wanted to know.

I frequently create full backups of my servers, which is pretty simple with Raspberry Pis as SD cards are used as the storage medium. These can be cloned to a backup file and restored to an SD card later on with a simple 'dd' command. So I restored an image from before the patch to a spare SD card, booted the server from it and pointed one of the test tools at it. As expected, the installation was vulnerable to Heartbleed. The whole exercise took less than 30 minutes, of which 20 minutes were spent waiting for the dd command to finish the restore to the SD card. Pretty cool timing for a full server restore.
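For completeness, this is roughly what the backup and restore boils down to, here wrapped in a small Python helper that calls the plain 'dd' command line. The device path is just a placeholder for wherever the SD card reader shows up on your system, so double-check it (e.g. with lsblk) before running anything like this.

```python
# Clone a Raspberry Pi SD card to an image file and write it back later.
# /dev/mmcblk0 is a placeholder device node; verify yours before running.
import subprocess

SD_CARD = "/dev/mmcblk0"        # placeholder: check with lsblk first
IMAGE   = "raspi-backup.img"

def backup():
    # Read the whole SD card into an image file.
    subprocess.run(["dd", f"if={SD_CARD}", f"of={IMAGE}", "bs=4M"], check=True)

def restore():
    # Write the image back to a same-sized (or larger) SD card.
    subprocess.run(["dd", f"if={IMAGE}", f"of={SD_CARD}", "bs=4M"], check=True)
```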