The Selfoss Advantage – Full Articles Instead of RSS Stumps

And here's another post on Selfoss, my RSS server at home that aggregates the news and posts from the websites I'm interested in: What I only realized when going through the code is that for a number of predefined news websites, the server not only grabs the content of the RSS feed but actually downloads the text of the full article linked in the feed and inserts it into my reading list. That's particularly useful when I read my news stream on a mobile device, as especially in this scenario I'm not fond of having to open new tabs and wait for a full desktop-size web page to download. A really nice feature!

Unfortunately this functionality requires dedicated code for each website, as all the 'custom clutter' around an article's text needs to be removed. But again, open source shines in this regard. As there are a few websites on my reading list that only offer text stubs in their RSS feed and overblown web pages that take too long to download, I decided to do something about it. So I recently spent some time extending Selfoss a bit to fetch the full text content for those feeds. Once done and tested I learned how to use Git and GitHub and offered my code via a 'pull request' to the Selfoss project. And indeed, my code will be part of version 2.11 to be published soon.
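To give an idea of what such per-site extraction code has to do, here is a minimal Python sketch (Selfoss itself is written in PHP, so this is only an illustration, not its actual code): it downloads the article linked in a feed item and keeps just the article body, assuming the hypothetical site wraps its text in a known container element.

```python
# Minimal sketch of per-site full-text extraction (not Selfoss's actual code).
# Assumes the hypothetical site wraps its article text in <div class="article-body">.
import requests
from bs4 import BeautifulSoup

def fetch_full_text(article_url: str, content_selector: str = "div.article-body") -> str:
    """Download the article page and return only the article body as HTML."""
    html = requests.get(article_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    body = soup.select_one(content_selector)
    if body is None:
        return ""  # fall back to the RSS stub if the site layout changed

    # Remove the typical 'custom clutter' that sits inside the article container.
    for clutter in body.select("script, aside, .ad, .social-buttons"):
        clutter.decompose()

    return str(body)

# Example: replace a feed item's stub with the full article text.
# item["content"] = fetch_full_text(item["link"], "div.article-body")
```

The selector and clutter classes obviously differ per site, which is exactly why each supported website needs its own little piece of code.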

I really like open source 🙂

I Can’t Reach My Server – The Internet Must Be Down

There is a running gag among the engineers I work with that if the website of one of the most popular tech magazines in Germany can't be reached, the whole Internet must be down. This is because whenever one of us wants to check whether there is Internet connectivity, we type in the URL of this website to see if we can reach it, as it is (almost) always reachable.

So far so good. Recently, I was reminded of this running gag when I was in Taiwan and wanted to reach one of my servers at home and got no response, neither via the DSL line they are connected to nor via the LTE backup link. A quick interrogation of my GSM power socket via SMS revealed that there was no power outage either. So what could be the reason for this!?

As a next step I performed a traceroute and noticed that up to the edge of my provider's network in Germany everything was working. After that, however, responses stopped coming in. So indeed, for about half an hour the fixed-line and wireless networks of one of Germany's largest network operators were not reachable from the outside. Few probably noticed as it was 3 am local time. As I was in Taipei, however, it was 9 am for me and I did notice.

I wonder what will happen next time I travel!? I've had a DSL outage while I was traveling, a city-wide power outage that interrupted communication last December when I was on vacation, a power outage caused by construction work during another vacation, and now a backbone router outage on yet another trip. And whenever I think I can't imagine anything else, reality shows me another possibility.

Welcome to the connected world!

Shell Windows, DOS Boxes and Real Teletypes

If you are old enough, you might remember that 'shell' windows and 'DOS boxes' on graphical user interfaces are a virtualization of physical terminals that were connected over a serial interface to a mainframe computer. Sounds pretty ancient today, doesn't it!? But actually that wasn't the beginning, as those hardware terminals with keyboards and screens were themselves a virtualization of the original input/output device connected to a computer over a serial interface: a mechanical teletype machine with a paper tape reader.

When I recently did some history research I came across this video on YouTube that shows how an original teletype machine was used, before the days of hardware terminals with screens, to load programs from paper tape into an Altair 8800, the first personal computer. Once the program is loaded, the teletype is then used to type in instructions and programs and to see the result printed out on paper. One thing is for sure: after watching this video you'll never quite look at a shell window and a blinking cursor the same way again.

Wikipedia LTE Band Sorter

A quick entry today for a feature I've just found that I find incredibly helpful. As those of you involved in LTE know, the LTE band numbers were assigned in order of request and not by the frequency range a band number covers. In other words, tables sorted by band number jump wildly through the frequencies. Quite a number of times I wished for a list sorted by frequency rather than band number, e.g. to see overlaps or proximity. It turns out that Wikipedia has just that option. Have a look at the LTE band table in the E-UTRA entry: in the header row there are small up and down arrows for each column, so the table can be sorted by any column. How neat!
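If you'd rather reproduce the effect yourself, here is a small Python sketch with a handful of bands (downlink ranges as listed in the E-UTRA band table) that shows why sorting by frequency instead of band number is so much more useful:

```python
# Toy illustration: band numbers were assigned in order of request, so
# neighbouring numbers can be far apart in frequency. Downlink ranges in MHz.
bands = [
    (1,  2110, 2170),
    (3,  1805, 1880),
    (7,  2620, 2690),
    (8,   925,  960),
    (20,  791,  821),
]

# Sorted by band number the list jumps wildly through the spectrum ...
for band, dl_low, dl_high in sorted(bands):
    print(f"Band {band:2}: {dl_low}-{dl_high} MHz downlink")

print()

# ... sorted by downlink frequency, overlaps and neighbours become obvious.
for band, dl_low, dl_high in sorted(bands, key=lambda b: b[1]):
    print(f"Band {band:2}: {dl_low}-{dl_high} MHz downlink")
```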

Race To Sleep

I'm not actually sure who coined the term 'Race to Sleep' but I seem to hear it more often these days.

The idea behind it is to speed up an operation so that the device can enter a very low power sleep state more quickly afterwards, at the expense of a higher peak power requirement during the operation itself. When 'Race to Sleep' works, the overall energy required for the faster execution plus the longer sleep time (as a reward) is lower than in a previous architecture in which the operation took longer, drew less peak power, but left a shorter sleep time. The 'operation' can be just about anything: raw computing, more complexity to speed up data transmission, GPU work, etc.
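A back-of-the-envelope calculation makes the trade-off concrete. The power and time figures below are made-up numbers, purely to illustrate the mechanism:

```python
# 'Race to Sleep' back-of-the-envelope comparison with made-up numbers.
# Energy over a fixed 10-second window = active power * active time
#                                      + sleep power * remaining time.
WINDOW_S = 10.0
P_SLEEP_W = 0.05  # assumed deep-sleep power draw

def energy(active_power_w: float, active_time_s: float) -> float:
    sleep_time_s = WINDOW_S - active_time_s
    return active_power_w * active_time_s + P_SLEEP_W * sleep_time_s

slow = energy(active_power_w=1.0, active_time_s=8.0)  # slow, low peak power
fast = energy(active_power_w=2.0, active_time_s=3.0)  # fast, high peak power

print(f"slow: {slow:.2f} J, fast: {fast:.2f} J")
# slow: 1.0*8 + 0.05*2 = 8.10 J; fast: 2.0*3 + 0.05*7 = 6.35 J
# -> despite twice the peak power, the faster device uses less energy overall.
```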

Does this really work in practice or is it just a myth? It seems it can work and AnandTech wrote a very detailed post on this phenomenon comparing power consumption for the same operations between a mobile device and its successor version. Have a look here for the details.

But at the end of the post he also notes that in practice the gain, for example from downloading and rendering a web page faster at higher power and then making up for it by staying in a sleep state longer than before, may quickly be eaten up by users browsing the web more: because pages load more quickly, they can start scrolling earlier.

So perhaps 'Race to Sleep' is most effective when speeding up a task does not result in extra power expenditure later on because the user can interact with the device even more quickly than before.

Change In the Past 5 Years – PC vs. Mobile

When I look back 5 years I notice that the speed of change in the PC sector is quite different from what happened in mobile. Going back to the 2008/2009 timeframe, Windows Vista was the dominating (but not very much loved) operating system, and 2009 saw the launch of the not so different (but much more loved) Windows 7 that still dominates the PC today. Also, I still use the notebook I bought back in 2008 for what it was intended at the time: as a desktop PC replacement. It has a dual-core Intel CPU (a Centrino-platform machine), 4 GB of RAM and a 256 GB hard disk. Performance-wise it plays DVDs and streams video content just as well as my latest and greatest notebook does. From a user input response point of view it doesn't feel any slower than the machine I mostly use today. The switch, however, was not made because the old machine had become inadequate performance-wise, but because it was bought as a desktop replacement without the mobility in mind that I need today.

It's not that there haven't been advances in technology in this sector in the past 5 years, but they pale in comparison to what happened in mobile. Back in the 2008/09 timeframe, Symbian and Windows Mobile were the dominating operating systems. While Windows 7 is still alive and kicking on the desktop, those two mobile operating systems are pretty much extinct by now, having been replaced by mobile operating systems such as the Linux-based Android OS that launched in 2008. When you think about how Android looked then and what its capabilities were and compare it to today, the difference is truly remarkable. If you don't remember what the first Android device looked like, have a look at the picture that is part of the Wikipedia article on the HTC Dream, the first Android device. From a hardware point of view, the change has also been remarkable. The first Android device was launched with 192 MB of RAM, compared to the 1 or 2 GB of memory high-end devices feature today. Mobile processors have evolved from a 500 MHz single-core architecture to 1 to 2 GHz dual- or quad-core architectures with much improved processor design. Mobile GPU capabilities have risen even more dramatically, and the original 320×480 screen resolution is at best only found in very low-end mobile devices today.

The point I want to make with this comparison: there has surely been a lot of innovation in the PC and notebook sector, but devices bought 5 years ago are still in service today and work well on a 5-year-old operating system version that still dominates the market. In the mobile space the pace was much quicker, and smartphones bought 5 years ago are nowhere to be seen anymore, as the capabilities of current devices have improved so much that people were willing to upgrade to a new device at least once or twice during that timeframe.

This makes me wonder if we'll see the same innovation speed in mobile in the next 5 years or whether it will slow to a rate similar to what can be seen in the desktop/notebook market. And if this is the case, will there be a "next big thing" during that timeframe?

Some Thoughts on Paid Peering, Who Pays Whom and Why

In a previous post I've given an introduction to the different kinds of interconnections between the networks that form the Internet: Transit, Peering and Paid Peering. In this post I'd like to put down my notes on Paid Peering and who pays whom for what:

Paid Peering is used, for example, between access networks and content delivery networks or the content companies themselves, with the content side paying the access networks for the privilege of connecting directly. From what I can tell, content providers used to pay content distribution networks such as Akamai to store their content closer to the subscribers and to deliver it from there. In turn, Akamai paid for peering with the access networks. At some point some content providers started to build their own distribution networks and hence wanted to peer directly with access networks. In some cases they got this peering for free, especially from smaller access network providers, because those could not risk not offering the content to their subscribers. Also, free peering with the content provider was/is probably cheaper for them than getting this data over a Transit link for which they have to pay.

The balance of power is different, though, when a larger access network operator comes into play. They argue that the content provider should pay for the peering, as that was also the way it was done before, when a content distribution network sat between them and the content. The prime reason given for this is that they have to invest in their own network to transport the rising amount of video content and hence should be reimbursed by the content companies. The interesting part is the discrepancy with the small access network operators, which seem to do just fine without this cross-financing. In other words, paid peering between access network operator and content company is an interesting way to create monopolies that can be exploited when it comes to content-heavy applications.

Due to this it is easy to confuse paid peering and network neutrality as is frequently done in the press. Net neutrality requires all packets to be forwarded with equal priority while paid peering regulates who pays whom for a connection. In other words, an access network operator can be as network neutral as it wants and still get money from the content provider via paid peering.

For those who want to follow this train of thought I can recommend Dean Bubley's recent blog post on why 'AT&T's shrill anti-neutrality stance is dangerous'.

Were My Raspberry Servers Heartbleed Vulnerable?

Last week I patched my Raspberry Pi based web servers in a hurry to make sure they were no longer vulnerable to a Heartbleed attack. I decided to do this quickly as a check of the OpenSSL library on my servers showed that a vulnerable version was installed. What I couldn't check at the time was whether my web servers actually used the library for SSL encryption. I only later discovered that there were tools available to do just that, but by then my servers were already patched. So after returning home from a business trip I decided that I wanted to know.

I frequently create full backups of my servers, which is pretty simple with Raspberry Pis as SD cards are used as the storage medium: they can be cloned to a backup file and restored to an SD card later on with a simple 'dd' command. So I restored a backup from before the patch to a spare SD card, booted the server from it and pointed one of those test tools at it. As expected, the installation was vulnerable to Heartbleed. The whole exercise took less than 30 minutes, of which 20 minutes were spent waiting for the dd command to finish the restore to the SD card. Pretty cool timing for a full server restore.
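In case you haven't used dd for this before, here is a minimal Python sketch of the clone/restore workflow, simply wrapping the dd calls. The device path and image file name are assumptions and depend on how the SD card shows up on your machine, so double-check with lsblk before running anything, as dd overwrites its target without asking.

```python
# Sketch of the SD card backup/restore workflow, wrapping 'dd'.
# Device path and image name are assumptions; verify the device first.
import subprocess

SD_DEVICE = "/dev/mmcblk0"   # assumed SD card device on the backup machine
IMAGE = "raspi-backup.img"   # backup image file

def backup_sd_card() -> None:
    """Clone the whole SD card into an image file."""
    subprocess.run(["dd", f"if={SD_DEVICE}", f"of={IMAGE}", "bs=4M"], check=True)

def restore_sd_card() -> None:
    """Write the image back to a (spare) SD card."""
    subprocess.run(["dd", f"if={IMAGE}", f"of={SD_DEVICE}", "bs=4M"], check=True)
```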

Who Pays Whom?: User – DSL Provider – Transit – Video Portal – Reloaded

About a year ago I had a post under the same title in which I tried to figure out who pays whom on the Internet. At the time I got a lot of responses with insightful information, and while those helped to form a better picture, I was still a bit at a loss as to what was going on. Then recently a co-worker sent me a link to a book on the topic (thanks Christian!): 'The Internet Peering Playbook: Connecting to the Core of the Internet'. The epub and pdf versions are available for $10. Needless to say I could not resist and ordered a copy via a PayPal transfer. An hour later I had the ebook in my (virtual) hands and began to read eagerly.

The book is a joy to read, and in a couple of hours I managed to get through the first half, which contained the information I was mainly interested in. There are many examples that translate theory into practice, and here are some notes that are my takeaway. Perhaps they make sense to you as well despite their brevity, or perhaps they are a trigger to find out more. So here we go:

To better understand how the different networks that the Internet is comprised of connect with each other, one has to be aware of the different kind of connection types:

Internet Transit: The first type, which I also mentioned in my blog post a year ago, is a 'Transit' connection in which one party, e.g. the DSL/cable access network provider, pays a backbone network for connectivity to the rest of the Internet. Transit is the 'default' route: everything that can't be sent to or received from any other network interface is routed to and from it. Typically, DSL and cable providers pay for such connectivity, and prices in 2014 are in the range of tens of cents per megabit per second.

Peering: The second type of connectivity is referred to as 'Peering'. Peering is usually employed between two backbone networks that transmit and receive about the same amount of data to and from each other. As the traffic is about equal in each direction, no monetary compensation is exchanged between the two parties; it's a deal among equals. Instead, each party pays the costs for its side of the interconnection. Usually an Internet Exchange Point (IXP), to which many dozens of networks connect, is used for this purpose. Once two networks that have a link to an IXP agree to connect, a peering connection can be set up by establishing a route through the public IXP packet exchange matrix between the optical ports of the two networks. It's also possible to physically connect the two networks with a dedicated link in the IXP, which is called private peering. It's also common that two networks decide to peer at more than a single IXP location. Whether two networks peer with each other or whether one of the parties pays for transit (to another backbone network) to reach the other network seems to be not only a matter of an equal amount of data exchanged but also of psychology. The book contains interesting examples of the tactics employed by peering managers to move from transit to a network (via another network that is paid for the transit) to a direct peering connection.

Paid Peering: The third type is 'Paid Peering'. In this variant, two networks decide to interconnect, but unlike the normal peering described above, one party pays the other for the connection. Paid Peering is different from Transit because while Transit provides a default route to the whole Internet, Paid Peering only offers routes between the two networks and potentially to the subnets that pay those networks for Transit.
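To make the difference in routing terms concrete, here is a toy Python sketch of my own (not from the book); the network names and prefixes are made up. Peering and paid peering links only carry routes to the peer's network, while the transit link carries the default route for everything else:

```python
# Toy illustration of the routes carried by the three interconnection types.
# Network names and prefixes are made up (documentation prefixes).
import ipaddress

routes = [
    # (prefix, interconnection it is reachable over)
    ("203.0.113.0/24",  "peer A (settlement-free peering)"),
    ("198.51.100.0/24", "content network B (paid peering)"),
    ("0.0.0.0/0",       "backbone C (transit, default route)"),
]

def next_hop(destination: str) -> str:
    """Longest-prefix match: specific peering routes win, transit catches the rest."""
    dest = ipaddress.ip_address(destination)
    matches = [(ipaddress.ip_network(p), link) for p, link in routes
               if dest in ipaddress.ip_network(p)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("203.0.113.42"))  # reached via the peering link
print(next_hop("192.0.2.7"))     # everything else falls back to transit
```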

There we go, those are the three interconnection types that exist in practice. In a follow-up blog post I'll focus on Paid Peering, who pays whom and why. Stay tuned…

What If the NSA Did NOT Know Of Heartbleed?

The last couple of days of security news have been interesting to say the least. Heartbleed has become a headline even in non-tech circles. Now that it has been established how dangerous the bug is, how simple it is to weaponize and how easy it is to find in code that is publicly available, one facet of the discussion focuses on whether the NSA (and other spy agencies) have known about it and for how long. Unsurprisingly the NSA denies prior knowledge, and just as unsurprisingly there are only few who believe them.

What I find interesting in the discussion is that nobody has asked so far what it would mean if the NSA really didn't know about Heartbleed!?

I would assume that with a budget of billions of dollars annually they must have hordes of programmers whose only job it is to find weaknesses in open source code that is publicly available by nature. In other words, they must have stumbled over it unless they are totally incompetent. This is not something that hid deep inside the code; the bug is so obvious to someone specifically looking for weaknesses that it must have been an instant find.

So the NSA is damned one way or the other. If they did find the bug, did not report it and then lied about it, they put everyone at risk, even their own industry, because it is absolutely obvious that this bug is just as easy to find for other spy agencies. And if, on the other hand, they didn't find it, as they claim, one has to wonder what they spend all those billions of dollars on annually…