Who Pays Whom?: User – DSL Provider – Transit – Video Portal – Reloaded

About a year ago I wrote a post under the same title in which I tried to figure out who pays whom on the Internet. At the time I got a lot of responses with insightful information, and while those helped to form a better picture I was still a bit at a loss as to what was going on. Then recently, a co-worker sent me a link to a book on the topic (thanks Christian!): 'The Internet Peering Playbook: Connecting to the Core of the Internet'. The epub and pdf versions are available for $10. Needless to say I could not resist and ordered a copy via a PayPal transfer. An hour later I had the ebook in my (virtual) hands and began to read eagerly.
The book is a joy to read and I managed to get through the first half, which contained the information I was mainly interested in, in a couple of hours. There are many examples that translate theory into practice, and here are some notes that are my takeaway. Perhaps they make sense to you as well despite their brevity, or perhaps they are a trigger to find out more. So here we go:

To better understand how the different networks that the Internet is comprised of connect with each other, one has to be aware of the different connection types:

Internet Transit: The first type, which I also mentioned in my blog post a year ago, is a 'Transit' connection, in which one party, e.g. the DSL/cable access network provider, pays a backbone network for connectivity to the rest of the Internet. The transit link carries the 'default' route: everything that can't be sent or received via any other network interface is routed to and from it. Typically, DSL and cable providers pay for such connectivity, and prices in 2014 are in the area of tens of cents per Megabit per second.
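
To get a feeling for what such a price means in practice, here's a minimal back-of-the-envelope sketch in Python. It assumes the common 95th-percentile billing model (5-minute traffic samples, highest 5% discarded) and a made-up price of $0.50 per Mbps per month; both numbers are illustrative assumptions, not figures from the book:

```python
# Rough transit cost estimate, assuming 95th-percentile billing
# (5-minute samples, highest 5% discarded) and an illustrative price.
import random

def monthly_transit_cost(samples_mbps, price_per_mbps=0.50):
    """samples_mbps: list of 5-minute traffic samples in Mbit/s."""
    ordered = sorted(samples_mbps)
    # The billable rate is the 95th percentile of the samples.
    index = max(int(len(ordered) * 0.95) - 1, 0)
    return ordered[index] * price_per_mbps

# Example: a month of made-up samples for a network peaking around 8 Gbit/s.
random.seed(1)
samples = [random.uniform(2000, 8000) for _ in range(30 * 24 * 12)]
print(f"Billable monthly transit cost: ~${monthly_transit_cost(samples):,.0f}")
```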

Peering: The second type of connectivity is referred to as 'Peering'. Peering is usually employed in the backbone between two backbone networks that send and receive about the same amount of data to and from each other. As the traffic is about equal in each direction, no monetary compensation is exchanged between the two parties; it's a deal among equals. Instead, each party pays the costs for its side of the interconnection. Usually an Internet Exchange Point (IXP), to which many dozens of networks connect, is used for this purpose. Once two networks that have a link to an IXP agree to connect, a peering connection can be set up by establishing a route through the public IXP packet exchange matrix between the optical ports of the two networks. It's also possible to physically connect the two networks with a dedicated link inside the IXP, which is called private peering. It's also common that two networks decide to peer at more than a single IXP location. Whether two networks peer with each other, or whether one of the parties pays for transit (to another backbone network) to reach that network, seems to be not only a matter of an equal amount of data exchanged but also of psychology. The book contains interesting examples of the tactics employed by peering managers to move from reaching a network via transit (i.e. via another network that is paid for the transit) to a direct peering connection.

Paid Peering: The third type is 'Paid Peering'. In this variant, two networks decide to interconnect, but unlike the normal peering described above, one party pays the other for the connection. Paid Peering is different from Transit because while Transit provides a default route to the whole Internet, Paid Peering only offers routes to the other network and potentially to the customer networks that pay it for transit.
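
As a minimal illustration of the difference in route scope, here's a small Python sketch. The network names and prefixes are entirely made up; the point is simply that a transit link announces a default route, while (paid) peering only announces the peer's own prefixes and its customer cone:

```python
# Toy model: which destinations become reachable over each link type.
# All prefixes below are invented for illustration.

def routes_over_link(link_type, peer_prefixes, peer_customer_prefixes):
    if link_type == "transit":
        # Transit hands us a default route: everything is reachable.
        return {"0.0.0.0/0 (default, rest of the Internet)"}
    if link_type in ("peering", "paid peering"):
        # Peering only announces the peer's own prefixes plus the
        # prefixes of networks that buy transit from the peer.
        return set(peer_prefixes) | set(peer_customer_prefixes)
    raise ValueError(link_type)

peer = ["203.0.113.0/24"]             # the peer's own network
peer_customers = ["198.51.100.0/24"]  # networks paying the peer for transit

for link in ("transit", "paid peering"):
    print(link, "->", routes_over_link(link, peer, peer_customers))
```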

There we go, those are the three interconnection types that exist in practice. In a follow-up blog post I'll focus on Paid Peering, Who Pays Whom and Why. Stay tuned…

What If the NSA Did NOT Know Of Heartbleed?

The last couple of days of security news have been interesting to say the least. Heartbleed has become a headline even in non-tech circles. Now that it has been established how dangerous the bug is, how simple it is to weaponize and how easy it is to find in the publicly available code, a facet of the discussion focuses on whether the NSA (and other spy agencies) have known about it and for how long. Unsurprisingly, the NSA denies prior knowledge, and just as unsurprisingly there are only few who believe them.

What I find interesting in the discussion is that nobody has asked so far what it would mean if the NSA really didn't know about Heartbleed!?

I would assume that with a budget of billions of dollars annually they must have hordes of programmers whose only job it is to find weaknesses in open source code that is publicly available by nature. In other words, they must have stumbled over it unless they are totally incompetent. This is not something hidden deep inside the code; the bug is so obvious to someone specifically looking for weaknesses that it must have been an instant find.
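
For readers who haven't looked at the details: the actual bug is in OpenSSL's C heartbeat handling, but its essence can be sketched in a few lines. The server trusts the length field in the request and echoes back that many bytes from its memory, regardless of how much data the client actually sent. This is a conceptual Python illustration only, not the real OpenSSL code:

```python
# Conceptual sketch of the Heartbleed flaw (the real bug is in OpenSSL's
# C heartbeat code; this only illustrates the missing length check).
server_memory = b"PING" + b" ...private keys, session cookies, passwords... "

def heartbeat_vulnerable(claimed_length, payload):
    # Trusts the length field from the request and echoes back that many
    # bytes, even though the client only sent a 4-byte payload.
    return server_memory[:claimed_length]

def heartbeat_fixed(claimed_length, payload):
    # The fix: never return more than was actually received.
    if claimed_length > len(payload):
        return b""  # silently drop malformed requests
    return payload[:claimed_length]

print(heartbeat_vulnerable(40, b"PING"))  # leaks adjacent memory contents
print(heartbeat_fixed(40, b"PING"))       # returns nothing
```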

So the NSA is damned one way or the other. If they did find the bug, did not report it and then lied about it, they put everyone at risk, even their own industry, because it is absolutely obvious that this bug is easy to find for other spy agencies as well. And if, on the other hand, they didn't find it, as they claim, one has to wonder what they spend all those billions of dollars on annually…

When A Book Still Trumps Online Search

Search engines have revolutionized the way many people, including myself, learn new things. Before the days of the Internet, working on a specific programming problem sometimes meant reading manuals and trying out a lot of different things before getting them to work. These days a quick search on the Internet usually reveals several solutions to choose from. While this works well, especially for 'small scale' problems, I still feel much more comfortable reading a book when things get more complex.

Take my recent adventures with Apache, PHP and MySQL for example. I am sure I could have learnt a lot by just using online resources and 'googling' my way through it, but there are quite a number of things that have to fall into place at the beginning: setting up the Eclipse development environment, installing XAMPP, getting the PHP debugger up and running, learning how PHP applies the concepts of functional and object oriented programming, learning how to work with SQL databases in this environment, etc. As the combination of these things goes far beyond what a single online search could return, I decided to pick up a book that shows me how to do these things step by step instead of trying to piece things together on my own.

For this particular adventure I decided to go for the 'PHP and MySQL 24-Hour Trainer' book by Andrea Tarr. While it was already written back in 2011, it's about Apache, PHP and MySQL basics, and these haven't changed very much since then. The book is available in both print and ebook editions, and while I went for the printed edition I think the ebook version would have worked equally well for me. In other words, book vs. online search is not about offline vs. online, it's more about having all the information required in a single place.

It's interesting to observe that at some point a cutover occurs in the learning process: Once the basic things are in place and the problem space becomes narrow enough, it becomes easier to use an online search engine to find answers rather than to explore the topic in a book. A perfect symbiosis, I would say.

My Raspberry Pi Servers and Heartbleed

Unless you've been living behind the moon in the past 24 hours you've probably heard about 'Heartbleed', the latest and greatest secure http vulnerability, which Bruce Schneier rates an 11 on a scale from 1 to 10. Indeed, it's as bad as it can get.

As I have a number of (Debian based) Raspberry Pi servers on which I host my Owncloud, Selfoss and a couple of other things, I was of course also affected and scrambled to get my shields back up. Fortunately the guys at Raspberry Pi reacted quickly and offered the OpenSSL fix in the Raspbian repository. Once that was installed I got a new SSL certificate for my domain name, distributed it to my servers and then updated all my passwords used on those systems. Two hours later… and I'm done.
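
If you want a quick indication of which OpenSSL version a system is running, a small Python sketch like the one below does the trick. It is only a rough indicator: Debian and Raspbian backported the fix without bumping the upstream version number, so on those systems the package changelog is authoritative, not the version string:

```python
# Quick check of the OpenSSL version Python is linked against.
# Heartbleed affects upstream OpenSSL 1.0.1 through 1.0.1f (fixed in 1.0.1g).
import ssl

version_string = ssl.OPENSSL_VERSION     # e.g. 'OpenSSL 1.0.1e 11 Feb 2013'
version = version_string.split()[1]      # e.g. '1.0.1e'
print("Linked against:", version_string)

affected = {"1.0.1", "1.0.1a", "1.0.1b", "1.0.1c", "1.0.1d", "1.0.1e", "1.0.1f"}
if version in affected:
    print("Upstream version is in the Heartbleed-affected range;")
    print("check whether your distribution backported the fix.")
else:
    print("Upstream version is outside the affected range.")
```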

And here are two quotes from Bruce's blog that make quite clear how bad the situation really is:

"At this point, the odds are close to one that every target has had its private keys extracted by multiple intelligence agencies."

and

"The real question is whether or not someone deliberately inserted this bug into OpenSSL"

I'm looking forward to the investigation into who's responsible for the bug. As 'libssl' is open source, it should be possible to find out who modified that piece of code in 2011.

The Joy Of Open Source: You Can Fix It Yourself

Over the past months I've learnt a lot about Apache, PHP and MySQL in my spare time as I wanted to implement a database application with a web front end for my own purposes. While the effort would have probably been too high just for this, I was additionally motivated by the fact that learning about these tools also gives me a deeper understanding of how web based services work under the hood.

Databases have always been my weak point, as I have had little use for them so far. After my project I have gained a much better understanding of MySQL and SQLite and feel comfortable working with them. What a nice side effect.

And in addition, the knowledge gained helps me to better understand, debug and fix issues in open source web based applications I use on a regular basis. A practical example is Selfoss, my RSS aggregator of choice that I've been using ever since Google decided to shut down their RSS reader product last year. While I am more than happy with it, the feed update procedure stops working for a while every couple of months. When it happened again recently, I dug a bit deeper with the knowledge I had gained and found out that the root cause was links to non-existent web pages that the update process tried to fetch. Failing to load these pages resulted in an abort of the whole update process. A few lines of code and the issue was fixed.
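
Selfoss itself is written in PHP and I won't reproduce the actual patch here, but the general pattern of the fix is simple enough to sketch in a few lines of Python: instead of letting one unreachable feed abort the whole run, each feed is fetched in its own try/except block and failures are merely logged (the feed URLs below are made up):

```python
# General pattern of the fix: one dead feed must not abort the whole update.
# The feed URLs are invented; Selfoss itself implements this in PHP.
import urllib.request
import urllib.error

feeds = [
    "https://example.com/feed.xml",
    "https://no-longer-exists.example.org/rss",   # fails to resolve / 404s
    "https://example.net/atom.xml",
]

def update_all(feed_urls):
    for url in feed_urls:
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                data = response.read()
            print(f"updated {url} ({len(data)} bytes)")
        except (urllib.error.URLError, OSError) as error:
            # Before the fix this exception bubbled up and stopped the run;
            # now the broken feed is skipped and the others still update.
            print(f"skipping {url}: {error}")

update_all(feeds)
```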

I guess it's time to learn about 'git' now so I can not only fix the issue locally and report it to the developer but also contribute the fix itself and potentially further features I'm developing. Open source at its best!

Firefox Shows Https Encryption Algorithm

By chance I recently noticed that Firefox at some point has started to show detailed information on the encryption and key negotiation method that is used for an https connection. As I summarized in this post, there are many methods in practice, offering different levels of security, and I very much like that Wireshark tracing is no longer required to find out what level of security a connection offers.
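
If you'd rather check from a script than from the browser, Python's ssl module can report the negotiated protocol and cipher suite as well. A minimal sketch (the hostname is an arbitrary example):

```python
# Print the TLS version and cipher suite negotiated with a server.
import socket
import ssl

host = "www.example.com"   # arbitrary example host
context = ssl.create_default_context()

with socket.create_connection((host, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cipher_name, protocol, secret_bits = tls.cipher()
        print(f"{host}: {protocol}, {cipher_name}, {secret_bits}-bit key")
```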

T-Mobile USA Joins Others In the 700 MHz Band

A couple of days ago I read in this news report that T-Mobile USA has acquired some lower A-Block 700 MHz spectrum from Verizon, which it intends to use for LTE in the near future. As an outsider, US spectrum trading just leaves me baffled, as network operators keep exchanging spectrum for spectrum and/or money. To me it looks a bit like the 'shell game' (minus the fraud part, of course…).

Anyway, last year I did an analysis of how the 700 MHz band is split up in the US to get a better picture, and you can find the details here. According to my hand drawn image and the more professionally done drawings in one of the articles linked in that post, there's only 2×5 MHz left in the lower A-Block, the famously unused lower part of band 12. Again, the articles linked in my post last year give some more insight into why band 12 overlaps with band 17. Also, this Wikipedia article lists the companies for band 12/17.

So these days it looks like the technical issues that kept the lower 5 MHz of band 12 unused have disappeared. How interesting, and it will be even more interesting to see if AT&T will at some point support band 12 in their devices or whether they will stick to the band 17 subset. Supporting band 12 would of course mean that a device could be used in both T-Mobile US's and AT&T's LTE networks, which is how it has been done in the rest of the world on other frequency bands for ages. But then, the US is different and I wonder if AT&T has the power to continue to push band 17.

One more thought: 2×5 MHz is quite a narrow channel (if that is what they bought, I'm not quite sure…). Typically, 2×10 MHz (in the US, Europe and Asia) or 2×20 MHz (mainly in Europe) channels are used for LTE today, so I wouldn't expect speed miracles on such a channel.
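
A rough back-of-the-envelope sketch of what that means for peak rates: LTE peak throughput scales roughly linearly with channel bandwidth, so a Category 4 device that reaches 150 Mbit/s on a 20 MHz carrier tops out at about a quarter of that on 5 MHz. This is a simplification that ignores control channel overhead, but it's good enough to make the point:

```python
# Rough LTE downlink peak rates by channel bandwidth (simplified: linear
# scaling of the Category 4 figure of 150 Mbit/s at 20 MHz, overhead ignored).
PEAK_AT_20_MHZ = 150.0  # Mbit/s, LTE UE Category 4, 2x2 MIMO, 64QAM

for bandwidth_mhz in (5, 10, 20):
    peak = PEAK_AT_20_MHZ * bandwidth_mhz / 20.0
    print(f"{bandwidth_mhz:>2} MHz carrier: ~{peak:.0f} Mbit/s peak downlink")
```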

Fake GSM Base Stations For SMS Spamming

If you think fake base stations are 'only' used as IMSI-catchers by law enforcement agencies and spies, this one will wake you up: This week, The Register reported that 1500 people have been arrested in China for sending spam SMS messages over fake base stations. As there is ample proof of how easy it is to build a GSM base station with the required software behind it to make it look like a real network, I tend to believe that the story is not an early April fool's day joke. Fascinating and frightening at the same time. One more reason to keep my phone switched to 3G-only mode, as in UMTS, mutual authentication of network and device prevents this from working.

USB 3.0 In Mobile Chipsets

Today I read for the first time, in this post over at AnandTech, that chipsets for mobile devices such as smartphones and tablets now support USB 3.0. I haven't seen devices on sale yet that make use of it, but once they do, one can easily spot them, as the USB 3.0 Micro-B plug is backwards compatible but looks different from the current USB 2.0 connector on mobile devices.

While smartphones and tablets are still limited to USB 2.0 in practice today, most current notebooks and desktops support USB 3.0 for ultra fast data exchange with a theoretical maximum data rate of 5 Gbit/s, roughly 10 times faster than the 480 Mbit/s offered by USB 2.0. In practice USB 3.0 is most useful when connecting fast external hard drives or SSDs to another device, as these can be much faster than the sustainable data transfer rate of USB 2.0, which is around 25 MByte/s in practice. As ultra mobile devices such as tablets are replacing notebooks to some extent today, it's easy to see the need for USB 3.0 in such devices as well. And even smartphones might require USB 3.0 soon, as the hardware has become almost powerful enough for them to be used as full powered computing devices such as notebooks in the not too distant future. For details see my recent post 'Smartphones Are Real Computers Now – And The Next Revolution Is Almost Upon Us'.

Making use of the full potential of USB 3.0 in practice is difficult today, even with notebooks that have more computing power than smartphones and tablets. This is especially the case when data has to be decrypted on the source device and re-encrypted on the target device, as this requires the CPU to get involved. This is much slower than when the data is unencrypted and can simply be copied from one device to another via fast direct memory access that does not involve the CPU. In practice my notebook with an i5 processor can decrypt + encrypt data from the internal SSD to an external backup hard drive at around 40 MByte/s. That's faster than the maximum transfer speed supported by USB 2.0 but way below what is possible with USB 3.0.
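
To put those numbers into perspective, here's a small calculation of how long a hypothetical 100 GB backup would take at the rates mentioned above. The 100 GB figure and the ~400 MByte/s practical USB 3.0 estimate are my own assumptions; the other two rates are from the text:

```python
# How long would a hypothetical 100 GB backup take at different rates?
# The 100 GB size and the ~400 MByte/s USB 3.0 estimate are assumptions.
BACKUP_GBYTES = 100

rates_mbyte_per_s = {
    "USB 2.0, practical": 25,
    "decrypt + encrypt pipeline (i5 notebook)": 40,
    "USB 3.0, practical estimate": 400,
}

for label, rate in rates_mbyte_per_s.items():
    minutes = BACKUP_GBYTES * 1024 / rate / 60
    print(f"{label:<42} ~{minutes:5.1f} minutes")
```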

If You Think Any Company Can Offer End-to-End Encrypted Voice Calls, Dream On…

I recently came across a press announcement that a German mobile network operator and a security company providing hardware and software for encrypted mobile voice calling have teamed up to launch a mass market secure and encrypted voice call service. That sounds cool at first, but it won't work in practice, and they even admitted it right in the press announcement when they said, in a somewhat cloudy sentence, that the government would still be able to tap into 'relevant information'. I guess that disclaimer was required, as the laws of most countries are quite clear that telecommunication providers have to enable lawful interception of call content and metadata in real time. In the US, for example, this is required by the CALEA wiretapping act. In other words, network operators are legally unable to offer secure end-to-end encrypted voice telephony services. Period.

Wiretapping to get the bad guys once sounded like a good idea and still does. But from a trust perspective, the ongoing spying scandal shows more than clearly that anything less than end-to-end encryption is prone to interception at some point in the transmission chain, by legal and illegal entities. Also, when you think about it from an Internet perspective, CALEA and similar laws elsewhere are nothing less than requiring web companies to provide a backdoor to their SSL certificates so that HTTPS secured traffic can be intercepted. I guess law makers can be glad they came up with these laws before the age of the Internet. I wonder if this would still fly today?

Anyway, this means that to be really secure against wiretapping, a call needs to be encrypted end-to-end. As telecom companies are not allowed to offer such a service, it needs to come from elsewhere. This also means that there can't be a centralized hub for such a service: it needs to be peer-to-peer without centralized infrastructure, and the code must be open source so that a code audit can ensure there are no backdoors. In fact, no company can offer the service at all, as it would be pressured, and probably required by law, to put in a backdoor (see e.g. Lavabit and Silent Circle).

This is quite a challenge and requires a complete rethinking of how to communicate over the Internet in the future, at least for those who want privacy for their calls. And without companies being able to provide such a service, it's going to be an order of magnitude more difficult. For private individuals this probably means that they have to put a server at home for call establishment and to tunnel the voice stream between fixed and mobile devices behind firewalls and NATs. For companies it means they have to put a server on their premises and equip their employees with secure voice call apps that contact their own server rather than that of a service provider.

While I already do this for instant messaging between members of my household (see my article on Prosody on a Raspberry Pi), I still haven't found something that could rival Skype in terms of ease of use, stability and voice + video quality.