C64 Vintage and Virtual Hardware For Exploring The Past

Back in the early 1990s, when I got my first IBM PC clone, I gave little thought to transferring my documents from my previous non-IBM computers, the legendary C64 and Amiga, over into the new world. I'm not sure why, but it didn't seem important then. As a consequence, the earliest digital records that I have on my computer today date back to 1993. Today, that's of course a bit of a pity. With a bit of luck, however, a lot of disks and tapes should still be in the attic of my parents' house and at some point I'll go and get them for a closer inspection. The big question, however, is how to view them and eventually migrate them to the PC!? After all, even the small 3.5 inch floppy disks in C64 and Amiga format are incompatible with the old 3.5 inch floppy format used in the PC world.

So I started a little project to get a vintage C64 back up and running again, and in addition I bought a little piece of hardware that emulates a 1541 floppy drive on the C64's IEC bus and stores virtual floppy images on a standard Microsoft FAT formatted SD card. The device comes in the shape, color and design of the original 1541 floppy drive, but shrunk to the size of a matchbox. Beautiful engineering, and the only thing missing is the noise the original drive made! The smallest of SD cards will suffice because, after all, a single 5.25 inch floppy in the C64 days could only hold around 170 kB of data. There are tons of virtual C64 floppy images out there, but I'm sure they'll all fit on a single 2 GB SD card. The sd2iec adapter comes with a virtual floppy image explorer that runs on the C64 to select the desired floppy image to work with. The 1541 emulator box also has a button to switch from one floppy image to the next, which is handy when programs require more than a single floppy.

An example of this, and my prime use case, is GEOS, the graphical user interface for the C64 by Berkeley Softworks that looked very much like the first MacOS GUI. GEOS is booted from a start disk, but all applications such as GeoWrite, GeoPaint, etc. are stored on separate disks. No problem with the push button to virtually change floppies. Floppy images of GEOS and of the write and paint programs are available on the net, and they work perfectly on the real vintage C64 with the virtual 1541 drive. To see if I can actually export the documents I wrote with GeoWrite at the time, I created a new GeoWrite file and wrote it to the virtual disk. The content of the virtual floppy can then be imported from the SD card on the PC with 'cbmconvert'. And once that step is done, individual GeoWrite documents can be converted to a text file with a GeoWrite converter program. Unfortunately, images and formatting are lost in the process, but I guess for my purposes the text is the most important part anyway, and this worked with my test document. I had a look at the GEOS Programmer's Reference Guide that is available at archive.org and luckily the file format is described there in detail. So should I want more than just the text, it could be a fun project to fully convert GeoWrite files and images to something readable on a PC today.
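For reference, this is roughly what the PC side of the round trip looks like once the SD card is mounted; the image file name is made up and the flag is quoted from memory, so better verify it against the cbmconvert man page first:

  # extract the files from a virtual floppy image into native PC files
  # ('-n' should select native output format - check 'man cbmconvert')
  cbmconvert -n geowrite-documents.d64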

Perfect, the proof of concept works, so the next step is to get my hands on the real files in case they still exist…

SSH Client Certificates to Talk to My Raspberry Pis

I like to interact with my Raspberry Pis at home on the shell level for lots of different things and I can't count the number of times I open a remote shell window every day for various purposes. I also like to keep my virtual desktop tidy, so I usually close shell windows when I'm done with a specific task. The downside is that I have to type in the server password frequently, which is a pain. So recently a colleague of mine gave me the idea to use ssh client certificates, i.e. public/private key pairs, to get rid of the password prompts when I open a new ssh session to a remote server. There are a few things that have to be put into place and I thought I'd put together a quick mini-howto, as the information I could find on the topic was a bit more confusing than necessary.

Step 1: Create a public/private key pair on the ssh CLIENT machine

  • Check that '~/.ssh' exists
  • Generate a public/private keypair with: 'ssh-keygen -t rsa'
  • The command generates the following two files in '~/.ssh': id_rsa and id_rsa.pub
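In a terminal on the client this looks as follows; just press enter when asked for a passphrase if you want completely password-less logins:

  # create the directory if it doesn't exist yet and keep it private
  mkdir -p ~/.ssh
  chmod 700 ~/.ssh
  # generate the key pair (id_rsa and id_rsa.pub)
  ssh-keygen -t rsa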

Step 2: Put the public key part of the client on the ssh SERVER machine

  • Check that the '.ssh' directory exists in the home folder of the user you want to log in as
  • Then do the following:

cd .ssh
nano authorized_keys

  • Append the content of the client's id_rsa.pub file to the authorized_keys file on the server side
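On systems that have it, 'ssh-copy-id' does both steps in one go; the user name and address below are of course placeholders for your own Pi:

  # appends the local id_rsa.pub to authorized_keys on the server,
  # asking for the server password one last time
  ssh-copy-id pi@192.168.1.10

Also make sure the file is only readable and writable by its owner, otherwise the SSH daemon will ignore it:

  chmod 600 ~/.ssh/authorized_keys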

Step 3: Configure the SSH Daemon on the SERVER machine to accept client certificates

These commands make the SSH daemon accept certificates:

  cd /etc/ssh

  sudo cp sshd_config sshd_config.bak

  sudo nano sshd_config

  –> make sure the following three lines are uncommented:

  RSAAuthentication yes
  PubkeyAuthentication yes
  AuthorizedKeysFile %h/.ssh/authorized_keys

  • Restart the SSH daemon to finish the process with: 'sudo /etc/init.d/ssh restart'
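To check that the key is actually used before closing the old session, a new connection can be restricted to public key authentication; if everything is in place, a shell opens without a password prompt (user and host are placeholders again):

  # fails instead of falling back to a password prompt if the key setup is broken
  ssh -o PreferredAuthentications=publickey pi@192.168.1.10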

Once done, ssh can be used the same way as before but there's no password prompt anymore. Great!

Migrating My Owncloud At Home To A NUC

A little bit more than a year ago, my attitude towards the "cloud" changed dramatically when a combination of an inexpensive Raspberry Pi and Owncloud enabled me to run my own calendar and contact synchronization service from a server at home. Also, exchanging large files, and moving files between my mobile devices without uploading them to a commercial server, has become very easy, again thanks to the amazing Owncloud software.

While the Raspberry Pi is fast enough for contact and calendar synchronization, there is a noticeable delay when logging into the web interface or when someone I share a file with clicks on a link. A couple of weeks ago I decided to do something about that and started thinking about an alternative hardware setup. In the end I chose an Intel NUC (Next Unit of Computing) with a Celeron x86 processor, as it's only about twice the size of a Raspberry Pi but has significantly more processing power for the times when it's needed.

The picture on the left shows the two devices side by side. In terms of power consumption there is of course a difference: the Raspberry Pi requires 2.5 watts on average when running Owncloud while the NUC requires around 6 watts. From a yearly power bill point of view that's a difference of around 10 Euros and thus quite acceptable. Unlike the Raspi, the NUC has a fan, but it's barely audible and the box hardly gets warm at all, at least with the type of usage I have.
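The back-of-the-envelope math behind the 10 Euro estimate, assuming an electricity price of around 0.29 Euros per kWh:

  # 6 W (NUC) - 2.5 W (Raspi) = 3.5 W extra, running around the clock
  echo "3.5 * 24 * 365 / 1000 * 0.29" | bc -l
  # -> about 8.9 Euros per year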

There are also NUCs with faster processors and newer architectures available, such as Haswell-based i3 and i5 versions, but they are still significantly more expensive than the older Celeron version. The NUC itself cost 139 Euros, the 32 GB mSATA SSD another 35 Euros and the 4 GB of RAM another 30 Euros. In total I paid around 200 Euros for the hardware, which is around six times the price of a Raspi.

As far as processing speed is concerned, the difference is very noticeable. The delay of 15-20 seconds when logging in for the first time, or before a web page is shown when someone clicks on a download link, is now virtually gone. Also, it now only takes around 3 seconds to initially load the 300 contacts into the web interface when I click on the icon for this feature.

Server software wise, I decided to go for 'Ubuntu 12.04 LTS Server', as 14.04 LTS was still around the corner when I installed the system. Installing the OS was almost a breeze, but I had to do it twice as for some strange reason it couldn't write the boot sector the first time around. Perhaps this had something to do with UEFI, because things worked when I tried once more after disabling it and changing some other boot related settings in the BIOS. Fortunately it's also possible to enable auto boot in the BIOS when power becomes available, so a power outage doesn't leave the server out of action.

I've been running the new setup for a while now and I'm very happy with it. So if you run a similar Owncloud setup at home and need more speed, I can fully recommend moving over to faster hardware at a still quite affordable price.

Peering: Who Pays Whom and Why? Level-3’s Point of View

In the past couple of weeks I had a few posts on what peering is and who pays whom for what. Here's my book review that describes the topic in detail and here's a link to my post on the difference between network neutrality and Internet Service Providers (especially in the US) trying to charge content providers for traffic. Today I came across two posts on Level-3's blog (see here and here) that give their perspective on the matter: an interesting read and a practical example of the theory I discussed in my previous posts. Well worth the read!

(via 'I Cringely')

Sunspider Smartphone to Notebook Speed Comparison – 2014 Update

At the end of 2012 I had a post in which I described the results of a speed comparison between notebooks and smartphones. One and a half years later I decided it was time for another look to see if and how the world has moved on.

Again, I've been using the Sunspider test suite that runs in a web browser. Not only has the hardware moved on, browsers might also have received more optimized Javascript engines in the meantime, and the Sunspider test suite itself has been updated from version 0.9.8 used in the previous post to version 1.0.2 used in this one. On the notebook side, at least, this makes no difference, as the benchmark on the same notebook with an Intel i3-2367M at 1.4 GHz came in almost exactly where it was one and a half years ago (416 vs. 410 ms).

So here are my 2014 results with current hardware:

178 ms, MacBook Pro, Intel i7, 2.4 GHz, Firefox 28, OS X 10.9.2

260 ms, Lenovo E330 (Intel i3-2348M @ 2.30 GHz), Ubuntu 12.04, Firefox 28
534 ms, Lenovo E330, virtualized Windows 7 running on the Ubuntu host

416 ms, Lenovo E130 (Intel i3-2367M, 1.4 GHz), Ubuntu 12.04, Firefox 28
—> 410 ms, direct comparison to Sunspider 0.9.8 with Firefox 16.0.2 in the previous test

411 ms, iPhone 5S (€700+), ARM64, native browser, result taken from here.

(1266 ms, netbook, Intel Atom N270 (first generation), 1.6 GHz, Firefox 16.0.2, Ubuntu 12.04, (2009))

1376 ms, mid-range Android 4.2.2 based smartphone (€250), Opera Mobile browser

1928 ms, low-end Android 4.3 based device (€130)

The direct comparison shows that both the notebook and the smartphone worlds have moved on significantly. The iPhone 5s has twice the single-core CPU power of its predecessor and my current notebook based on an i3 processor is twice as fast as the notebook I used one and a half years ago. The mid-range Android phone now has the CPU power a flagship Android smartphone had one and a half years ago. Note that I didn't measure the 2009 Intel Atom based netbook again (hence the line in brackets) but just put it here for comparison's sake, to show where fast smartphones sold today stand compared to netbooks of the 2009 timeframe. Quite impressive!

The State of LTE Carrier Aggregation in Practice

LTE networks have been up and running for five years now and we have certainly come a long way in terms of speed, stability and usable devices since 2009. The next step in the race for ever faster speeds is Carrier Aggregation (CA), i.e. the simultaneous use of several LTE carriers in different bands. While the specifications allow for a lot of flexibility, in practice I mainly see the following CA deployments in the field today:

South Korea and the US seem to be the countries with the most pressing need for CA, as for various reasons they are limited to 10 MHz carriers. Verizon, for example, has thus started deploying carrier aggregation of two 10 MHz carriers, one in the 700 MHz band and one in the 1700/2100 MHz AWS band, for a combined bandwidth of 20 MHz.

In Europe, Germany seems to be the country most interested in Carrier Aggregation. Here, operators already have 20 MHz carriers on air in the 1800 MHz and 2600 MHz bands (bands 3 and 7). In addition, three operators have a 10 MHz carrier in the 800 MHz band (band 20). In other words, they use carrier aggregation to go beyond the 20 MHz they already have. One network operator combines spectrum in the 800 MHz and 2600 MHz bands for a total downlink carrier bandwidth of 30 MHz. Another operator is about to aggregate resources in the 1800 MHz and 2600 MHz bands for a total of 40 MHz, i.e. twice the bandwidth aggregated by Verizon in the US.

So far, only a few devices support Carrier Aggregation, but by the end of 2014 I expect it will be quite a handful, so from my point of view this is the state of the art in deployed networks at the moment. Looking a bit into the future, there are a couple of further enhancements in the pipeline. On the one hand, data rates could be increased by using more than two antennas on the base station and mobile device side. 4×4 MIMO has been trialed already, but the difficulty is how to get more than 2 antennas per sector onto rooftops without unduly increasing the size and weight of the antennas. On the mobile device side there's a similar dilemma, perhaps not so much in weight but in available space for even more antennas. Time will tell. And a bit further down the road is carrier aggregation with three independent component carriers. 3GPP has just recently standardized the new device categories 9 and 10 for this purpose, with a theoretical maximum downlink speed of 450 Mbit/s (20 MHz = 150 Mbit/s, 40 MHz = 300 Mbit/s, 60 MHz = 450 Mbit/s). This whitepaper by Nomor Research contains some interesting details on this.
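The arithmetic behind these categories is simple, as the peak rate scales linearly with the number of aggregated 20 MHz component carriers, assuming around 150 Mbit/s per carrier with 2×2 MIMO and 64QAM:

  # theoretical peak downlink rate per number of 20 MHz component carriers
  for cc in 1 2 3; do
    echo "$cc x 20 MHz: $((cc * 150)) Mbit/s"
  done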

The Selfoss Advantage – Full Articles Instead of RSS Stumps

And here's another post on Selfoss, my RSS server at home that aggregates the news and posts from the websites I'm interested in. What I only realized when going through the code is that for a number of predefined news websites, the server not only grabs the content of the RSS feed but actually downloads the text of the full article linked in the RSS feed and inserts it into my reading list. That's particularly useful when I read my news stream on a mobile device, as especially in this scenario I'm not fond of opening new tabs and waiting for a full desktop-size web page to download. A really nice feature!

Unfortunately this functionality requires dedicated code for each website, as all the 'custom clutter' around an article's text needs to be removed. But again, open source shines in this regard. As there are a few websites on my reading list that only offer text stubs in their RSS feed and overblown web pages that take too long to download, I decided to do something about it. So I recently spent some time extending Selfoss a bit to expand those feeds to their full text content. Once done and tested, I learned how to use Git and GitHub and offered my code to the Selfoss project via a 'pull request'. And indeed, my code will be part of version 2.11, to be published soon.
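The basic idea can be sketched in a few lines of shell, even though Selfoss does this in PHP with per-site rules; the feed file name and the assumption that the site wraps its text in an <article> tag are purely hypothetical:

  # take the first item link from an RSS feed and fetch the full article,
  # then keep only the article body - the site-specific part of the job
  url=$(grep -o -m1 '<link>[^<]*</link>' feed.xml | sed 's/<[^>]*>//g')
  curl -s "$url" | sed -n '/<article/,/<\/article>/p'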

I really like open source 🙂

I Can’t Reach My Server – The Internet Must Be Down

There is a running gag between me and the fellow engineers I work with that if the website of one of the most popular tech magazines in Germany can't be reached, the whole Internet must be down. This is because whenever one of us wants to check if there is Internet connectivity, we type in the URL of this website to see if we can reach it, as it is (almost) always reachable.

So far so good. Recently, I was reminded of this running gag when I was in Taiwan and wanted to reach one of my servers at home, but got no response, neither via the DSL line they are connected to nor via the LTE backup link. A quick interrogation of my GSM power socket via SMS revealed that there was no power outage either. So what could be the reason for this!?

As a next step I performed a traceroute and noticed that up to the edge of my provider's network in Germany, everything was working. After that, however, responses stopped coming in. So indeed for about half an hour the fixed line and wireless network of one of Germany's largest network operators was not reachable from the outside. Few probably noticed as it was 3 am local time. As I was in Taipei, however, it was 9 am for me and I did notice.

I wonder what will happen next time I travel!? I've had a DSL outage while traveling before, a city-wide power outage that interrupted communication last December when I was on vacation, a power outage caused by construction work during another vacation, and now a backbone router outage on yet another trip. And whenever I think that I can't imagine anything else, reality shows me another possibility.

Welcome to the connected world!

Shell Windows, DOS Boxes and Real Teletypes

If you are old enough, you might remember that 'shell' windows and 'DOS boxes' on graphical user interfaces are a virtualization of physical terminals that were connected over a serial interface to a mainframe computer. Sounds pretty ancient today, doesn't it!? But actually that wasn't the beginning, as those hardware terminals with keyboards and screens were themselves a virtualization of the original input/output devices connected to a computer over a serial interface: a mechanical teletype machine and a paper tape reader.

When I recently did some history research I came across this video on YouTube that shows how an original teletype machine was used, before the days of hardware terminals with screens, to load programs from paper tape into an Altair 8800, the first personal computer. Once the program is loaded, the teletype is then used to type in instructions and programs and to see the result printed out on paper. One thing is for sure: after watching this video you'll never quite look at a shell window and a blinking cursor the same way as before.

Wikipedia LTE Band Sorter

A quick entry today about a feature I've just found and find incredibly helpful. As those of you involved in LTE know, the LTE band numbers were assigned in order of request and not by the frequency range covered by a band number. In other words, tables that are sorted by band number jump wildly through the frequencies. Quite a number of times I have wished for a list that is sorted by frequency instead of band number, e.g. to see overlaps or proximity. It turns out that Wikipedia has just that option. Have a look at the LTE band table in the E-UTRA entry: in the header row there are small up and down arrows for each column, so the table can be sorted by any column. How neat!