Race To Sleep

I'm not actually sure who coined the term 'Race to Sleep' but I seem to hear it more often these days.

The idea behind it is to speed up an operation so that the device can drop into a very low power sleep state sooner, at the expense of a higher peak power draw during the operation itself. When 'Race to Sleep' works, the overall energy required for the faster execution plus the longer sleep time (as a reward) is lower than on a previous architecture in which the operation took longer at a lower peak power but left less time for sleep. The 'operation' can be just about anything: raw computing power, more complexity to speed up data transmission, GPU power, etc.
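To make the trade-off concrete, here is a little back-of-the-envelope sketch in PHP. All numbers are made up for illustration; the point is only the energy = power × time arithmetic:

```php
<?php
// Back-of-the-envelope 'Race to Sleep' comparison. All numbers are invented
// for illustration; energy (joules) = power (watts) * time (seconds).

$window  = 10.0;  // total time window in seconds
$p_sleep = 0.05;  // power draw in the sleep state (W)

// Old architecture: slower operation at lower peak power, shorter sleep
$t_old = 8.0;     // operation takes 8 s
$p_old = 1.0;     // at 1 W peak
$e_old = $p_old * $t_old + $p_sleep * ($window - $t_old);

// New architecture: faster operation at higher peak power, longer sleep
$t_new = 3.0;     // operation takes 3 s
$p_new = 2.0;     // at 2 W peak
$e_new = $p_new * $t_new + $p_sleep * ($window - $t_new);

printf("old: %.2f J, new: %.2f J\n", $e_old, $e_new);
// old: 8.10 J, new: 6.35 J -> despite double the peak power, the faster
// architecture needs less energy over the same 10 second window
```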

Does this really work in practice or is it just a myth? It seems it can work: AnandTech wrote a very detailed post on this phenomenon, comparing the power consumption of the same operations on a mobile device and its successor. Have a look here for the details.

But at the end of the post he also notes that in practice the gain may be eaten up quickly. Downloading and rendering a web page faster at a higher power draw and then making up for it with a longer sleep only pays off if usage stays the same; because pages load faster, users can start scrolling and browsing sooner, and that extra activity quickly consumes the energy that was saved.

So perhaps 'Race to Sleep' is most effective when speeding up a task does not lead to extra power expenditure later on because the user can interact with the device even sooner than before.

Change In the Past 5 Years – PC vs. Mobile

When I look back 5 years I notice that the speed of change in the PC sector is quite different from what happened in mobile. In the 2008/2009 timeframe, Windows Vista was the dominating (but not very much loved) operating system, and 2009 saw the launch of the not so different (but much more loved) Windows 7 that still dominates the PC today. Also, I still use the notebook I bought back in 2008 for what it was intended at the time, as a desktop PC replacement. It has a dual core Intel Centrino CPU, 4 GB of RAM and a 256 GB hard disk. Performance-wise it plays DVDs and streams video content just as well as my latest and greatest notebook does. From a user input response point of view it doesn't feel any slower than the machine I mostly use today. That switch, however, was not made because the old machine had become inadequate performance-wise, but because it was bought as a desktop replacement without the mobility in mind that I need today.

It's not that there haven't been advances in the technology in this sector in the past 5 years, but they pale in comparison to what happened in mobile. Back in the 2008/09 timeframe, Symbian and Windows Mobile were the dominating operating systems. While Windows 7 is still alive and kicking on the desktop, those two mobile operating systems are pretty much extinct by now, having been replaced by mobile operating systems such as the Linux based Android OS that launched in 2008. When you think about how Android looked then and what its capabilities were and compare it to today, the difference is truly remarkable. If you don't remember what the first Android device looked like, have a look at the picture that is part of the Wikipedia article on the HTC Dream, the first Android device. From a hardware point of view, change has also been remarkable. The first Android device launched with 192 MB of RAM compared to the 1 or 2 GB of memory high end devices feature today. Mobile processors have evolved from a 500 MHz single core architecture to 1 to 2 GHz dual or quad core architectures with much improved processor design. Mobile GPU capabilities have risen even more dramatically, and the original 320×480 screen resolution is at best only found in very low end mobile devices today.

The point I want to make with this comparison: There has surely been a lot of innovation in the PC and notebook sector but devices bought 5 years ago are still in service today and work well on a 5 year old operating system version that still dominates the market. In the mobile space the pace was much quicker and smartphones bought 5 years ago are nowhere to be seen anymore as capabilities of current devices have improved so much that people were willing to upgrade at least once or twice to a new device during that timeframe.

This makes me wonder if we'll see the same innovation speed in mobile in the next 5 years or whether it will slow to a rate similar to what can be seen in the desktop/notebook market. And if it does slow down, will there be a "next big thing" during that timeframe?

Some Thoughts on Paid Peering, Who Pays Whom and Why

In a previous post I've given an introduction to the different kinds of interconnection between the networks that form the Internet: Transit, Peering and Paid Peering. In this post I'd like to put down my notes on Paid Peering and who pays whom for what:

Paid Peering is used, for example, between access networks and content delivery networks or the content companies themselves, with the content side paying the access networks for the privilege of connecting directly. From what I can tell, content providers used to pay content distribution networks such as Akamai to store their content closer to the subscribers and to deliver it from there. In turn, Akamai paid for peering to the access networks. At some point some content providers started to build their own distribution networks and hence wanted to peer directly with access networks. In some cases they got this peering for free, especially from smaller access network providers, because those could not risk not offering the content to their subscribers. Also, free peering with the content provider was, and probably still is, cheaper for them than getting this data over a Transit link for which they have to pay.

The balance of power is different, though, when a larger access network operator comes into play. They argue that the content provider should pay for the peering, as that was also the way it was done before, when a content distribution network sat between them and the content. The prime reason given is that they have to invest in their own network to transport the rising amount of video content and hence should be reimbursed by the content companies. The interesting part is the discrepancy with the smaller access network operators, which seem to do just fine without this cross-financing. In other words, paid peering between access network operators and content companies is an interesting way to create monopolies that can be exploited when it comes to content heavy applications.

Because of this it is easy to confuse paid peering with network neutrality, as is frequently done in the press. Net neutrality requires all packets to be forwarded with equal priority, while paid peering regulates who pays whom for a connection. In other words, an access network operator can be as network neutral as it wants and still get money from the content provider via paid peering.

For those who want to follow this train of thought I can recommend Dean Bubley's recent blog post on why 'AT&T's shrill anti-neutrality stance is dangerous'.

Were My Raspberry Servers Heartbleed Vulnerable?

Last week, I patched my Raspberry Pi based web servers in a hurry to make sure they were no longer vulnerable to a Heartbleed attack. I decided to do this quickly because a check of the OpenSSL library on my servers showed that a vulnerable version was installed. What I couldn't check at the time was whether my web servers actually used the library for SSL encryption. I only later discovered that there were tools available to do just that, but by then my servers were already patched. So after returning home from a business trip I decided that I wanted to know.

I frequently create full backups of my servers, which is pretty simple with Raspberry Pis as SD cards are used as the storage medium: they can be cloned to a backup file and later restored to an SD card with a simple 'dd' command. So I restored a backup image taken before the patch to a spare SD card, booted the server from it and ran one of the Heartbleed test tools against it. As expected, the installation was vulnerable to Heartbleed. The whole exercise took less than 30 minutes, of which 20 minutes were spent waiting for the dd command to finish the restore to the SD card. Pretty cool timing for a full server restore.

Who Pays Whom?: User – DSL Provider – Transit – Video Portal – Reloaded

About a year ago I wrote a post under the same title in which I tried to figure out who pays whom on the Internet. At the time I got a lot of responses with insightful information, and while those helped to form a better picture, I was still a bit at a loss as to what was going on. Then recently, a co-worker sent me a link to a book on the topic (thanks Christian!) – 'The Internet Peering Playbook: Connecting to the Core of the Internet'. The epub and pdf version is available for $10. Needless to say I could not resist and ordered a copy via a PayPal transfer. An hour later I had the ebook in my (virtual) hands and began to read eagerly.
The book is a joy to read and I managed to get through the first half, which contained the information I was mainly interested in, in a couple of hours. There are many examples that translate theory into practice, and here are some notes that are my takeaway. Perhaps they make sense to you as well despite their brevity, or perhaps they are a trigger to find out more. So here we go:

To better understand how the different networks that the Internet is comprised of connect with each other, one has to be aware of the different connection types:

Internet Transit: The first type, which I also mentioned in my blog post a year ago, is a 'Transit' connection in which one party, e.g. the DSL/cable access network provider, pays a backbone network for connectivity to the rest of the Internet. The Transit link is the 'default' route over which all traffic is sent and received that cannot be routed via any other network interface. Typically, DSL and cable providers pay for such connectivity, and prices in 2014 are typically in the area of tens of cents per Megabit per second.

Peering: The second type of connectivity is referred to as 'Peering'. Peering is usually employed between two backbone networks that transmit and receive about the same amount of data to and from each other. As the traffic is about equal in each direction, no monetary compensation is exchanged between the two parties; it's a deal among equals. Instead, each party pays the costs for its side of the interconnection. Usually an Internet Exchange Point (IXP), to which many dozens of networks connect, is used for this purpose. Once two networks that have a link to an IXP agree to connect, a peering connection can be set up by establishing a route through the public IXP packet exchange matrix between the optical ports of the two networks. It's also possible to physically connect the two networks with a dedicated link in the IXP, which is called private peering. It's also common for two networks to peer at more than a single IXP location. Whether two networks peer with each other or whether one of the parties pays for transit (to another backbone network) to reach that network seems to be not only a matter of an equal amount of data exchanged but also of psychology. The book contains interesting examples of the tactics employed by peering managers to move from reaching a network via transit (i.e. via another network that is paid for the transit) to a direct peering connection.

Paid Peering: The third type is 'Paid Peering'. In this variant, two networks decide to interconnect, but unlike with the normal peering described above, one party pays the other for the connection. Paid Peering is different from Transit because while Transit provides a default route to the whole Internet, Paid Peering only offers routes between the two networks and potentially to the networks that pay them for Transit.
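To tie the three types together, here is a toy PHP sketch of how an access network's route selection could look. The prefixes and network names are entirely hypothetical; the point is that peering and paid peering only cover the peer's own routes, while the transit link serves as the default for everything else:

```php
<?php
// Toy routing table of an access network. Prefixes and names are hypothetical.
// Peering and paid peering only announce the peer's own routes, while the
// transit provider is the default for everything that matches nothing else.
$routes = [
    '203.0.113.0/24'  => 'peering: OtherBackboneNet (settlement-free)',
    '198.51.100.0/24' => 'paid peering: BigContentCo (they pay us)',
    '0.0.0.0/0'       => 'transit: BackboneProvider (we pay them)',
];

function nextHop(array $routes, string $dst): string
{
    foreach ($routes as $prefix => $via) {
        if ($prefix === '0.0.0.0/0') {
            continue;                       // check the specific routes first
        }
        [$net, $len] = explode('/', $prefix);
        $mask = -1 << (32 - (int)$len);
        if ((ip2long($dst) & $mask) === (ip2long($net) & $mask)) {
            return $via;
        }
    }
    return $routes['0.0.0.0/0'];            // everything else goes over transit
}

echo nextHop($routes, '198.51.100.7'), "\n"; // paid peering: BigContentCo (they pay us)
echo nextHop($routes, '8.8.8.8'), "\n";      // transit: BackboneProvider (we pay them)
```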

There we go, those are the three interconnection types that exist in practice. In a follow-up blog post I'll focus on Paid Peering, Who Pays Whom and Why. Stay tuned…

What If the NSA Did NOT Know Of Heartbleed?

The last couple of days of security news have been interesting, to say the least. Heartbleed has become a headline even in non-tech circles. Now that it has been established how dangerous the bug is, how simple it is to weaponize and how easy it is to find in code that is publicly available, a facet of the discussion focuses on whether the NSA (and other spy agencies) knew about it and for how long. Unsurprisingly the NSA denies prior knowledge, and just as unsurprisingly there are only a few who believe them.

What I find interesting in the discussion is that nobody has asked so far what it would mean if the NSA really didn't know about Heartbleed!?

I would assume that with a budget of billions of dollars annually they must have hordes of programmers whose only job it is to find weaknesses in open source code, which is publicly available by nature. In other words, they must have stumbled over it unless they are totally incompetent. This is not something hidden deep, deep inside the code; the bug is so obvious to someone specifically looking for weaknesses that it must have been an instant find.

So the NSA is damned one way or the other. If they did find the bug, did not report it and then lied about it, they put everyone at risk, even their own industry, because it is absolutely obvious that this bug is easy to find for other spy agencies as well. And if, on the other hand, they didn't find it, as they claim, one has to wonder what they spend all those billions of dollars on annually…

When A Book Still Trumps Online Search

Search engines have revolutionized the way many people, including myself, learn new things. Before the days of the Internet, working on a specific programming problem sometimes meant reading manuals and trying out a lot of different things before getting things to work. These days a quick search on the Internet usually reveals several solutions to choose from. While this works well, especially for 'small scale' problems, I still feel much more comfortable reading a book when things get more complex.

Take my recent adventures with Apache, PHP and MySQL for example. I am sure I could have learnt a lot by just using online resources and 'googling' my way through it, but there are quite a number of things that have to fall into place at the beginning: setting up the Eclipse development environment, installing XAMPP, getting the PHP debugger up and running, learning how PHP applies the concepts of functional and object oriented programming, learning how to work with SQL databases in this environment, etc. As the combination of these things goes far beyond what a single online search could return, I decided to pick up a book that shows me how to do these things step by step instead of trying to piece things together on my own.

For this particular adventure I decided to go for the 'PHP and MySQL 24-Hour Trainer' book by Andrea Tarr. While it was already written back in 2011, it's about Apache, PHP and MySQL basics, and these haven't changed very much since then. The book is available in both print and ebook editions, and while I went for the printed edition, I think the ebook version would have worked equally well for me. In other words, book vs. online search is not about offline vs. online, it's more about having all the required information in a single place.

It's interesting to observe that at some point a cutover occurs in the learning process: once the basic things are in place and the problem space becomes narrow enough, it becomes easier to use an online search engine to find answers than to explore the topic in a book. A perfect symbiosis, I would say.

My Raspberry Pi Servers and Heartbleed

Unless you've been living behind the moon for the past 24 hours you've probably heard about 'Heartbleed', the latest and greatest secure HTTP vulnerability, which Bruce Schneier gives an 11 on a scale from 1 to 10. Indeed, it's as bad as it can get.

As I have a number of (Debian based) Raspberry Pi servers on which I host my Owncloud, Selfoss and a couple of other things, I was of course also affected and scrambled to get my shields back up. Fortunately the guys at Raspbian reacted quickly and offered the SSL fix in the repository in short order. Once that was installed I got a new SSL certificate for my domain name, distributed it to my servers and then updated all the passwords used on those systems. Two hours later… and I'm done.

And here are two quotes from Bruce's blog that make quite clear how bad the situation really is:

"At this point, the odds are close to one that every target has had its private keys extracted by multiple intelligence agencies."

and

"The real question is whether or not someone deliberately inserted this bug into OpenSSL"

I'm looking forward to the investigation into who's responsible for the bug. As 'libssl' is open source, it should be possible to find out who modified that piece of code in 2011.

The Joy Of Open Source: You Can Fix It Yourself

Over the past months I've learnt a lot about Apache, PHP and MySQL in my spare time as I wanted to implement a database application with a web front end for my own purposes. While the effort would have probably been too high just for this, I was additionally motivated by the fact that learning about these tools also gives me a deeper understanding of how web based services work under the hood.

Databases have always been my weak point as I have had little use for them so far. After this project I have gained a much better understanding of MySQL and SQLite and feel comfortable working with them. What a nice side effect.

And in addition, the knowledge gained helps me to better understand, debug and fix issues in open source web based applications I am using on a regular basis. A practical example is Selfoss, my RSS aggregator of choice that I've been using ever since Google decided to shut down their RSS reader product last year. While I am more than happy with it, the feed update procedure stops working for a while every couple of months. When it happened again recently, I dug a bit deeper with the knowledge I have gained and found out that the root cause was links to non-existent web pages that the update process tried to fetch. Failing to load these pages aborted the whole update process. A few lines of code and the issue was fixed.
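In essence, the change looked something like the sketch below. This is not the actual Selfoss code, just a simplified PHP illustration of the idea; fetchFeed() and storeItems() are hypothetical placeholders for what the real update routine does:

```php
<?php
// Simplified sketch of the idea behind the fix - not the actual Selfoss code.
// Before: a single unreachable feed URL threw an exception that aborted the
// whole update run. After: errors are caught per feed, so the rest still update.
function updateAllFeeds(array $feedUrls): void
{
    foreach ($feedUrls as $url) {
        try {
            $content = fetchFeed($url);   // hypothetical helper, throws on HTTP errors
            storeItems($url, $content);   // hypothetical helper, writes items to the DB
        } catch (Exception $e) {
            // Log the dead link and carry on instead of letting the whole update die
            error_log("Skipping feed $url: " . $e->getMessage());
        }
    }
}
```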

I guess it's time to learn about 'git' now so I can not only fix the issue locally and report it to the developer, but also supply the fix and potentially further features I'm developing for it. Open source at its best!

Firefox Shows HTTPS Encryption Algorithm

By chance I recently noticed that at some point Firefox started to show detailed information on the encryption and key negotiation method that is used for an https connection. As I summarized in this post, there are many methods in practice offering different levels of security, and I very much like that Wireshark tracing is no longer required to find out what level of security a connection offers.