Old DVDs And New Drives Don’t Make A Good Pair – Hello Old PC

Optical DVD drives are going out of fashion in notebooks these days. In theory that's not a bad thing, as it saves space and weight, and one can always buy an external USB DVD drive for a few euros should one really need one. The problem is that the ones I tried in recent weeks are of such bad quality that they fail to read many of the DVDs and CDs I wanted to read.

Read issues often do not appear right away when inserting a CD or DVD but only later, when I'm already halfway or two thirds through the content. Sometimes a DVD that can't be fully read in one drive works fine in another, and vice versa. Sometimes a DVD fails in both, but at different locations. Quite a mess.

But then I remembered that I still have a 15 year old PC standing around in the corner with two DVD drives from back then, solidly built and quite expensive at the time. Despite their age, they've so far been able to read each and every DVD and CD that was partially unreadable on those crappy few-euro USB DVD drives.

Perhaps it's time to convert my CDs and DVDs while that computer still works…

The Nibbler 4-Bit CPU PCB Has Arrived

Last month I reported on a great 4-bit self-made CPU project called "The Nibbler". It continues to fascinate me, as the freely available tools, such as an assembler and a simulator, help me get a deeper understanding of how "computers" work at their core. But while simulating the CPU is great fun, I'd really like to build the real thing myself. Wire wrapping is not my thing, so I went ahead and ordered the printed circuit board for self-assembly. Despite being shipped from Canada, it only took a week to arrive. I was quick to order it because there were only a few left. In the meantime the PCB is sold out, but there's a waiting list, and I guess if there are enough requests there might be a second batch. Great, now the hunt for the parts begins 🙂

HTTPS Public Key Pinning (HPKP) Is Great – But Mobile Support Is Only Half Baked So Far

A couple of months ago, Chrome, Firefox and perhaps other browsers began to 'pin' the HTTPS certificates used by Google, Twitter and others for their web pages. This significantly improves security for these web pages, as their certificates can no longer be signed by any of the hundreds of Certificate Authorities (CAs) trusted by web browsers but only by one or a few select ones. So far, however, these pins were part of the web browsers' own code. Recently, most desktop and mobile browsers have added support for the generic HTTPS Public Key Pinning (HPKP) method standardized in RFC 7469, which enables any HTTPS-protected web site to do the same. Time for me to add it to my Owncloud and Selfoss servers as well, to protect myself from man-in-the-middle attacks.

HPKP works by adding a public key pin header to the HTTP response headers that are returned to the web browser each time a web page is loaded. On the first request, the web browser stores the pin hashes; whenever a page from the same domain is loaded again later, it compares the hashes of the HTTPS certificates it receives with those previously stored. If they don't match, the page load is aborted and the user is shown an error message that can't be overridden. For the details of how to generate the hashes and how to configure your webserver have a look here and here.
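For illustration, here is a small Python sketch of how such a pin value can be computed from a certificate. It assumes the third-party 'cryptography' package, and 'mycert.pem' is just a placeholder file name; HPKP hashes the DER-encoded public key info of the certificate, not the certificate as a whole.

    # Sketch: compute an HPKP pin-sha256 value from a PEM certificate.
    # Assumes the third-party 'cryptography' package; 'mycert.pem' is a
    # placeholder file name, not one used anywhere in this post.
    import base64
    import hashlib
    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    with open("mycert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    # HPKP pins the SHA-256 hash of the DER-encoded SubjectPublicKeyInfo
    # of the certificate, not a hash of the whole certificate.
    spki = cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    print(base64.b64encode(hashlib.sha256(spki).digest()).decode("ascii"))

The resulting base64 string is what goes into the pin-sha256 entries of the response header.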

The first screenshot on the left (taken from Firefox's web developer network console) shows what the public key pin looks like in the HTTPS response headers of my web server. In my case I set the validity of the pinning to 86400 seconds, i.e. one day. This is long enough for me, as I access my Owncloud and Selfoss servers several times a day. As I don't change my certificate very often, I decided not to pin one of the CA certificates in the chain of trust but to be even more restrictive and pin my own certificate at the end of the chain.

On the PC I successfully verified that Firefox stores the pin hashes and blocks access to my servers by first supplying a valid certificate and a corresponding public pin hash and then removing the pin header and supplying a different valid certificate. Even after closing and reopening the browser, access was still blocked, and I could only reach my Owncloud instance again after I reinstated the original certificate. Beautiful.
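To make the check itself a bit more tangible, here is a small sketch of what a pinning browser does on a repeat visit: fetch the server's live certificate, hash its public key and compare the result against the stored pin. The hostname and the stored pin value below are placeholders, and a real browser of course checks every certificate in the chain against all stored pins rather than just the leaf.

    # Sketch: mimic the browser-side pin check against a live server.
    # 'cloud.example.org' and STORED_PIN are placeholder values.
    import base64
    import hashlib
    import ssl
    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    HOST, PORT = "cloud.example.org", 443
    STORED_PIN = "47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU="  # placeholder

    pem = ssl.get_server_certificate((HOST, PORT))
    cert = x509.load_pem_x509_certificate(pem.encode("ascii"))
    spki = cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    live_pin = base64.b64encode(hashlib.sha256(spki).digest()).decode("ascii")

    if live_pin == STORED_PIN:
        print("pin matches - the page load would proceed")
    else:
        print("pin mismatch - a pinning browser would abort the page load")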

On Android, I tried the same with Firefox Mobile and Opera Mobile. At first I was elated, as both browsers blocked access when I used a valid certificate that was different from the one pinned before. The second screenshot on the left shows how Opera Mobile blocks access. Unfortunately, however, both browsers only seem to store the pin hashes in memory: after restarting them, both allowed access to the server again. That's a real pity, as Android frequently terminates my browser when I switch to other large apps. That's more than an unfortunate oversight; it's a real security issue!

I've opened bug reports for both Firefox Mobile and Opera Mobile, so let's see how long it takes them to implement the functionality properly.

Stagefright 2 – And Nobody Cares?

News is inflationary… Back in August there was a big wave in the press when it was discovered that Android, across all versions, had a couple of pretty serious remote code execution and privilege escalation vulnerabilities in the libstagefright library, which is called every time a video is shown or previewed. The wave was as big as it was because the vulnerabilities are easily exploitable from the outside by embedding videos in web pages or messages. Device companies promised to patch their devices in a timely fashion and to change the way security patching would be done in the future. For some devices this has even happened, but for many older devices (read: 2+ years old) nothing was done. Since the news broke, things have calmed down again. Then, in early October, another batch of serious Stagefright issues was discovered that are just as exploitable as the first ones. This time, however, the echo was quite faint.

It really makes me wonder why!? Perhaps it's because the vulnerabilities haven't been exploited on a large scale so far? Which makes me wonder why not; black hats are usually quite quick to exploit things like that. Does nobody know what to do with smartphones under their control? Or perhaps the bad guys are not yet familiar with coding in assembly language on ARM and with the Google Android API? If so, then the latest episode was perhaps one of the final warning shots before things get real. Let's hope the good guys use the time well to fortify the castle.

On the positive side, Google has patched the vulnerable code in the meantime, and so has CyanogenMod, so my devices are patched.

The Politics Behind LTE-Unlicensed

For some time now, interested companies in 3GPP have been pushing for an extension to the LTE specifications to make the technology usable as an air interface for the 5 GHz unlicensed band, currently the domain of Wi-Fi and other radio technologies for which no license is required to operate (i.e. it's free for everyone to use). I wrote about the technology aspects of this earlier this year, so have a look there for the details. Apart from the technical side, however, another interesting topic is the politics behind LTE-Unlicensed, as not everybody seems to be thrilled by LTE marching into unlicensed territory.

Some parties in 3GPP are totally against LTE becoming usable in an unlicensed band, fearing competition from companies that haven't paid hundreds of millions for beachfront spectrum property. Some cautiously support it in its current incarnation, referred to as LTE-LAA (Licensed-Assisted Access), as it requires an LTE carrier in a licensed band to control transmission of an LTE carrier in an unlicensed band; in effect, that keeps the would-be upstart competition at bay. And then there are those who want to completely release the brakes and extend LTE to make it usable in a standalone way in unlicensed bands. Perfectly irreconcilable. I'm writing all of this because I recently came across an article that sheds some light on what's going on, which I found quite interesting.

My Uploads Are Three Times Faster On LTE Than With VDSL At Home

It's a bit ironic, but my uplink speed is three times higher when I'm sitting in a train commuting to work, connected over LTE, than when I'm sitting at my desk at home, connected over a 25 Mbit/s downlink + 5 Mbit/s uplink VDSL line.

I just had that thought when I uploaded a 50 MB ZIP file to the cloud in well under a minute at around 15 Mbit/s, which is, mind you, not the maximum LTE can provide on a 20 MHz carrier, but my uplink speed is limited. It's really time for a fiber link at my home in Germany like the one I already have in Paris. But unfortunately, German politics creates no incentives for network providers to catch up to more developed parts of the world… Quite the contrary: DSL vectoring is the future as far as the government and the local incumbent are concerned 🙁

Book Review: The Innovators

The history of computing has me firmly in its grip, and so after having read “Fire in the Valley” I continued with “The Innovators”. While the latest edition of the previous book mainly focused on what was going on in Silicon Valley from the 1970s to the 1990s, “The Innovators” expands the story back to Charles Babbage and Ada Lovelace in the 19th century and ends in the 21st century with the creation and evolution of Google.

“The Innovators” is of course much briefer about what happened in Silicon Valley in the 1970s to 1990s than the previous book I read. Instead it tells the stories of many other people, how they built on the work of their predecessors and how their success was usually due to working in a team rather than doing everything on their own, as it is often portrayed elsewhere. It also doesn’t focus on developments in only a single location: it covers how Konrad Zuse came up with his electromechanical computer in Germany in the 1940s, Alan Turing’s work in Great Britain, the computers that were built in Britain after the war, how the ENIAC in Philadelphia came to be, the women who pioneered programming such as Grace Hopper and ENIAC programmer Jean Jennings, the story of John von Neumann, the Atanasoff-Berry computer, how the transistor was invented, again by a team, at Bell Labs, etc., etc., etc.

Beginning in the 1960s, the book then continues the story with the move from transistors to integrated circuits and how Silicon Valley mushroomed from Shockley Semiconductor to Fairchild to Intel. One of the many interesting new insights I gained is that Intel was founded not as a processor company but to produce memory chips, as that was seen as the major application of integrated circuits once it was understood how to cram more than just a few transistors onto a die. The microprocessor on a chip only came later and was not envisioned as a product when Intel was created.

I could go on and on about the book, but to keep it short: I very much enjoyed reading it, as it doesn’t only convey facts but also tells the stories of the people, giving a sense of who they were, how they grew up and what drove them to do what they did.

Nibbler: The 4-Bit Self-Made CPU

Time flies, but it still seems like yesterday that I discovered "But How Do It Know", the must-read book if you want to understand how a CPU works, the central piece of electronics in any electronic device today, be it a SIM card, a smartphone, a notebook or a supercomputer at a large computing facility. Once you have read this book and have some background in tinkering with electronics, you can build a working CPU with memory and I/O at home. I started doing this and came as far as making shift registers and memory work. Unfortunately, there just wasn't enough time and there were too many components I wanted to use in my design, so that's about as far as I got. But perhaps there is yet another way to achieve this goal without waiting for retirement: The Nibbler by Steve Chamberlin.

The Nibbler is a 4-bit CPU with ROM and RAM, fitting on a single medium-sized board. The difference between what I had in mind and the Nibbler is that it uses a single-chip 4-bit Arithmetic Logic Unit (ALU), which I initially wanted to build out of individual logic chips such as adders. Also, I had an 8-bit ALU and address bus in mind, but obviously the 4-bit approach makes wiring a lot more practicable. And finally, the Nibbler simplifies things by separating program ROM from data RAM, which again saves a lot of wires. While the original Nibbler design was done in wire-wrap technology, William Buchholz has picked up the project and produced a "real" PCB, which again significantly reduces the time it takes to put things together.
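To illustrate what separating program ROM from data RAM means in practice, here is a tiny Python sketch of a Harvard-style 4-bit machine: instructions come out of one memory, data lives in another. The two opcodes are invented for the example and have nothing to do with the Nibbler's actual instruction set.

    # Toy sketch of a Harvard-architecture 4-bit machine: instructions are
    # fetched from a program ROM while data lives in a separate RAM.
    # The two opcodes (LOADI, ADDM) are invented for illustration only.
    ROM = [(0x1, 0x3),    # LOADI 3: accumulator = 3
           (0x2, 0x0)]    # ADDM 0:  accumulator += RAM[0]
    RAM = [0x5] + [0] * 15    # sixteen 4-bit data cells

    acc, pc = 0, 0
    while pc < len(ROM):
        opcode, operand = ROM[pc]       # fetch from program ROM
        if opcode == 0x1:               # LOADI: load a 4-bit immediate
            acc = operand & 0xF
        elif opcode == 0x2:             # ADDM: add a RAM cell, keep 4 bits
            acc = (acc + RAM[operand]) & 0xF
        pc += 1

    print(acc)    # 3 + 5 = 8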

Obviously this design is a bit different from the CPU discussed in "But How Do It Know". But in the end that doesn't matter, because the concepts are the same. Using a single ALU chip is somewhat of a compromise between reducing complexity and staying as discrete as possible to understand how the stuff really works. Fortunately there's a good description of the 74HC181 ALU chip on Wikipedia and a great diagram that shows what it looks like inside: basically several function blocks for different arithmetic functions, built from a total of 75 gates.

Another thing that makes this ALU chip appealing is its historical background. According to the Wikipedia page linked above, it was the basis for a number of historically important computers that were designed before fully integrated CPU chips became available, the DEC PDP-11 being one of them. The PDP-11 was obviously not a 4-bit computer, but with the help of a support chip it's possible to daisy-chain a number of these 4-bit ALUs to perform arithmetic operations on bytes and words.
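The principle behind that daisy-chaining is simply carry propagation from one 4-bit slice to the next. Here is a small Python sketch of the idea, modelling only addition rather than the 74181's full function table:

    # Sketch: two 4-bit ALU slices cascaded via the carry signal to add
    # 8-bit values. Only the ADD function is modelled here, not the full
    # 74181 function table, and carry look-ahead is ignored.
    def alu_slice_add(a_nibble, b_nibble, carry_in):
        total = a_nibble + b_nibble + carry_in
        return total & 0xF, total >> 4      # (4-bit result, carry out)

    def add8(a, b):
        lo, carry = alu_slice_add(a & 0xF, b & 0xF, 0)                     # low slice
        hi, carry = alu_slice_add((a >> 4) & 0xF, (b >> 4) & 0xF, carry)   # high slice
        return (hi << 4) | lo, carry

    result, carry_out = add8(0x3A, 0x5C)
    print(hex(result), carry_out)    # 0x96 0, i.e. 58 + 92 = 150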

Finding stuff like this is one of the reasons why the Internet is so great. Without it, it's unlikely that I would ever have found this project, because building your own CPU is not just a niche, it's more of a nano-niche. What I find a bit amazing, and sadly so, is that William hasn't sold that many of his boards yet, even though the price makes it a no-brainer for anyone with ambitions to understand how a CPU works not only from a theoretical point of view but also in practice.

15+ Devices At Home With A WiFi Interface Today – But It All Started With An Orinoco

At the end of the 1990s, coax-based 10 Mbit/s Ethernet was the network technology most companies used, including the one I worked for at the time. It was also there that I held an 802.11 wireless network card in my hand for the first time. The brand was Orinoco, the company that produced it was Lucent, and it could transfer data at a whopping 1 or 2 Mbit/s, just a tiny fraction of what is possible now. Today, 'Wi-Fi card' would be the term used by most, but Wi-Fi cards to be plugged into PCs are mostly a thing of the past, as most devices now have a Wi-Fi chip and antenna built in. Gone are also the days when Wi-Fi connectivity was expensive: for less than 10 euros one can buy an 802.11n Wi-Fi USB dongle these days for the few devices that are not Wi-Fi equipped yet.

So much for the history part of this blog entry. I'm writing all of this because I recently realized that I have over 15 Wi-Fi enabled devices at home these days that are in frequent use. There's my notebook of course that I work with every day, a test notebook to try out new things, a notebook mostly used for video streaming, at least 3 smartphones, my spouse's notebook and her 2 smartphones, a Raspberry Pi for audio streaming to the 20-year-old hi-fi set in the corner, the access point itself, a second access point that also acts as an Ethernet switch and 2 Wi-Fi enabled printers. In addition to these devices that are in use all the time, there are at least half a dozen Wi-Fi USB dongles that are occasionally put to good use with about as many Raspberry Pis for various purposes.

Quite an extraordinary development when I think back to that first, hyper-expensive Orinoco wireless LAN card I once held in my hands and marveled at how it was possible to transfer data so quickly over the air with such a 'little' card.

We Can’t Afford To Let Any Part Of The Internet Rot In Place

Over the last decade Wi-Fi devices have become tremendously popular. Unfortunately, it seems the Federal Communications Commission (FCC) and its counterpart in the EU are becoming concerned that third-party software controlling the radio hardware may negatively impact other applications using the same frequency bands, e.g. by increasing transmission power beyond the regulatory limits. As a result, the FCC and the EU are proposing, or have already implemented, rules that require the hardware manufacturer of a device to ensure that only their radio software can be used in the device. The problem is that instead of 'only' locking down the radio software, manufacturers of Wi-Fi access points and other Wi-Fi devices such as smartphones might be tempted to use this as an excuse to lock down the whole device, thus making it potentially impossible in the future to use Wi-Fi routers with alternative software such as OpenWrt or smartphones with alternative Android derivatives such as CyanogenMod.

While the EU has already published a directive to that end, which comes into effect in June 2016 but first needs to be implemented in the national laws of the individual member states, the FCC is still in the comments phase of its process. One response, signed by pretty much the who's who of the open source community, including Linus Torvalds, and by Internet luminaries such as Vint Cerf, is truly outstanding:

In their response, the authors describe the dire state of the Wi-Fi router market today, which is driven only by price, not by quality and responsibility. This has led to hundreds of millions of insecure devices in the field that pose a significant risk to their owners and to the Internet as a whole.

To fix both the radio issue addressed by the FCC and the wider issue of software with grave security flaws being abandoned by device manufacturers, the authors propose an alternative approach to the FCC's lock-down proposal:

1. Any vendor of software-defined radio (SDR), wireless, or Wi-Fi radio must make public the full and maintained source code for the device driver and radio firmware in order to maintain FCC compliance. The source code should be in a buildable, change-controlled source code repository on the Internet, available for review and improvement by all.

2. The vendor must assure that secure update of firmware be working at time of shipment, and that update streams be under ultimate control of the owner of the equipment. Problems with compliance can then be fixed going forward by the person legally responsible for the router being in compliance.

3. The vendor must supply a continuous stream of source and binary updates that must respond to regulatory transgressions and Common Vulnerability and Exposure reports (CVEs) within 45 days of disclosure, for the warranted lifetime of the product, or until five years after the last customer shipment, whichever is longer.

4. Failure to comply with these regulations should result in FCC decertification of the existing product and, in severe cases, bar new products from that vendor from being considered for certification.

5. Additionally, we ask the FCC to review and rescind any rules for anything that conflicts with open source best practices, produce unmaintainable hardware, or cause vendors to believe they must only ship undocumented “binary blobs” of compiled code or use lock down mechanisms that forbid user patching. This is an ongoing problem for the Internet community committed to best practice change control and error correction on safety-critical systems.

These are powerful proposals and I am delighted that the letter was signed by such a large number of well-known and respected people in the industry. But not everyone will like them, and I can already see the marching orders going out to hardware manufacturers' lobbyists to fight this. While many manufacturers have an open source driver for their Wi-Fi hardware today, the software that runs on the Wi-Fi chip itself is usually closed source and only available as a binary blob. Having the source of this part available as well would be truly revolutionary. Requiring that the owner of the device have ultimate control over the software update process (if they wish) is another strong requirement. This wouldn't prevent automatic updates for those who don't care, but it would preserve the ability to stay in control of what you own if you wish to do so.

The paper from which I have quoted the five proposals above is well worth a read. It is well written and explains in detail why the FCC should adopt these proposals instead of what it initially suggested. So let's see how visionary the FCC can be.

P.S.: The headline of this post is an abbreviated version of a quote by Vint Cerf in a recent article on the topic on Businesswire:

"We can't afford to let any part of the Internet's infrastructure rot in place. We made this proposal because the wireless spectrum must not only be allocated responsibly, but also used responsibly. By requiring a bare minimum of openness in the technology at the edge of the Internet, we'll ensure that any mistakes or cheating are caught early and fixed fast"

P.P.S.: And for further background info about EU directive 2014/53/EU, which has something similar to the FCC's proposal in mind, have a look at Julia Reda's recent blog entry on the topic.