Why Open Source Has Become A Must For Me

While the Internet is doubtless a great invention and I wouldn't want to miss it in my daily life anymore, there are certainly downsides to it. Last year I summarized them in a post titled "The Anti-Freedom Side Of The Internet". While I have found solutions for some of the issues I discussed there, such as privacy issues around remotely hosted cloud services, there is one topic I touched on too lightly that has become much more apparent to me since then: the changing business models of software companies and their interaction with their customers, which, compared to pre-Internet times, is not necessarily always to the customers' advantage.

In pre-Internet times, software was bought on disks or CDs and installed on a computer. For most commercial software you got a usage license of unlimited duration, and the user was in control of the software and the installation process. Fast forward to today and the model has changed significantly. Software is now downloaded over the Internet and installed, and the user's control over the process, and his privacy, are largely gone, because most software now requires Internet connectivity to communicate with an activation server of some sort before it installs. While I can understand such a move from the software companies' point of view, I find it highly controversial from a user's point of view because there is no control over what kind of information is transmitted to the software company. Also, most software today frequently 'calls home' to ask for security and feature updates, and perhaps also for other purposes. While this is good on the one hand to protect users, it is again a privacy issue, because the computer frequently connects to other computers on the Internet in the background without the user's knowledge, without his consent and without any insight into what data is transmitted.

And with some software empires on the decline, an interesting new license model, unthought of in pre-Internet times, is the annual subscription. Adobe is going down that path with Photoshop and Microsoft wants to do the same with its Office suite: instead of selling a perpetual license once, they now sell time-limited licenses that have to be renewed every year. Again, understandable from the software companies' point of view, as it ensures a steady income over the years. From a user's point of view I am not so sure, as it means yearly maintenance costs for software on home computers that simply weren't there before.

I wonder if that will actually accelerate the decline of those companies. If you buy software once, you are inclined to use it as long as possible and perhaps buy an upgrade every now and then. But if you are faced with a subscription model where you have to pay once a year to keep the software activated, I wonder if at some point people will be willing to try out alternatives. And alternatives there are, such as Gimp for graphics and of course LibreOffice.

Already today I see a lot of people using LibreOffice on their PCs and Macs, so that trend is definitely well underway. Perhaps it is also triggered by people no longer using only a single device, which would require more than one paid license. Also, the increasing number of different file formats and versions makes sending a document to someone for review and getting back a revision that is still formatted as before a gamble, so why stick to a particular word processor or a particular version of one?

In other words, Open Source is the solution in a world where the Internet allows software companies to assert more control over their customers than many of them are likely to want. Good riddance.

Historical Computing And The Busch 2090 – Simulating A Newer CPU Architecture On An Old Processor

It took a while to get hold of one, but I finally managed to get a 1980s Busch 2090 microcomputer I mused about in this and other previous blog posts. What I could previously only read about in the manual I could now finally try out myself on this 30 year old machine. True to the saying that when you read something you remember it, but when you do something yourself you understand it, I found out quite a number of things I had missed when only reading about it. So here's the tale of working with and programming a 30 year old machine that was built to teach kids and adults how computers work rather than how to work with computers:

The 2090 is programmed on a hexadecimal keyboard (see figure on the left) in a slightly abstracted pseudo machine code. It makes a number of things easier, such as querying the keyboard or displaying something on the six digit 7-segment display, but otherwise it looks like machine code. After doing some more research into the TMS 1600, the 4-bit processor used in the 2090, I found out that it is a direct descendant of the first Texas Instruments microprocessor, with a few more input/output lines, RAM and ROM added. Otherwise the processor works like its predecessor, the TMS 1000 from 1972. In other words, when the 2090 appeared in 1981 the processor architecture was already rather dated, and much more sophisticated processors such as the Intel 8080, the Zilog Z80, the Motorola 6800 and the MOS 6502 were available. While microcomputer learning kits appearing on the market a year or two later used these or other 8-bit processors, Busch decided to use an old 4-bit architecture. I can only speculate why, but pricing was perhaps the deciding factor.

Some research on the net revealed more material about the CPU and the other chips used. The manuals of the TMS 1000 architecture, which also cover later versions such as the 1400 and 1600, can be found here and here. These documents are quite fascinating from a number of perspectives, as they go into detail on the architecture and the instruction set and also give an interesting impression of how what we call 'embedded computing systems' today were programmed in the 1970s. Simulators were used to test the program, which was then put into a ROM on the CPU chip as part of the production run. No way to change it later, so it had better be perfect before production started.

What surprised me most when studying the hardware architecture and instruction set is that it is very different from the pseudo machine code presented to the user. My impression is that the pseudo machine code was very much inspired by newer processor architectures, with a lot of registers and a combined instruction and data RAM residing in a separate chip. The TMS 1600, however, has nothing of the sort. Instructions and data are separate on the chip: all 'real' machine code instructions are in a ROM that is accessed via an instruction bus separate from the bus over which the built-in memory is accessed.

While the pseudo machine code uses 16 registers, the processor itself only has an accumulator register. The 16 registers are simulated using the 64 x 4 bit RAM of the TMS 1600, which, on the real machine, is accessed as RAM over an address bus and not as registers. In addition, the processor chip has no external bus to connect to external memory. There are input and output lines, but their primary purpose is not to act as a bus system. The 2090, however, uses an external 1 kbyte RAM that is accessed via those input/output lines. In effect, the small operating system simulates an external bus to that memory chip, in which the pseudo machine code the user typed in resides. Very impressive!
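The trick of presenting RAM cells as registers can be sketched in a few lines. This is a toy model of the idea only, not the 2090's actual firmware; all names are invented:

```python
# Toy model: 16 pseudo "registers" backed by a flat 4-bit RAM,
# the way the 2090's interpreter maps its registers onto memory.
RAM = [0] * 64  # 64 x 4-bit cells, as in the TMS 1600

def reg_read(n):
    """Read pseudo-register n (0..15) from its backing RAM cell."""
    return RAM[n] & 0xF

def reg_write(n, value):
    """Write a 4-bit value to pseudo-register n, masking to 4 bits."""
    RAM[n] = value & 0xF

reg_write(3, 0x9)
print(reg_read(3))  # -> 9
```

The pseudo machine code user sees "register 3"; the interpreter only ever sees RAM cell 3 behind an address.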

There are a number of support chips on the board, used for purposes such as multiplexing different hardware units, i.e. the keyboard, the display, the LEDs, the input connectors and the memory, onto the input/output lines. As the input and output lines on the chip are separate and do not work like a bi-directional bus, one of the support chips offers tri-state capability for some of the hardware so it can be removed from the bus.

The TMS 1600 also has no stack as we know it today. Instead it has 3 subroutine return registers, so up to three nested subroutines can be called at any one time. This is already an improvement over the original TMS 1000, which only had one such register. Another interesting fact is that the TMS 1600 has no instructions for integer multiplication and division.

Apart from the accumulator register there are the x- and y-registers. These registers, however, are used to address the data RAM. A separate 6-bit program counter is used to address the ROM. While the pseudo machine code uses a zero flag and a carry flag, something that is part of all popular 8-bit microprocessor architectures even today, there are no such flags in the TMS 1600. Instead, there is only a status register that acts as a carry or zero flag depending on the operation performed. Also, the processor has no capability for indirect or indexed addressing.

Also quite surprising was that there are no binary logic instructions such as AND, OR and XOR in the CPU's instruction set. These therefore have to be simulated for the pseudo machine code, which does contain such commands, again resembling the instruction set of other 'real' CPUs of the time.
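To get a feel for what such a simulation involves, here is one way an interpreter could compute AND using only addition, subtraction-style halving and comparison. This is purely illustrative; it is not the 2090's actual ROM code:

```python
def bitwise_and(a, b, bits=4):
    """Emulate AND bit by bit using only add, compare and halving,
    the kind of loop needed when the CPU has no logic instructions."""
    result, weight = 0, 1
    for _ in range(bits):
        if (a % 2) + (b % 2) == 2:   # both low bits set?
            result += weight
        a //= 2                      # halving acts as a right shift
        b //= 2
        weight += weight             # doubling via addition
    return result

print(bitwise_and(0b1100, 0b1010))  # -> 8 (== 0b1000)
```

OR and XOR work the same way with a different condition on the two low bits, which makes it clear why a single logic instruction in pseudo machine code costs many real instructions per bit.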

Another nifty detail is the two different kinds of output lines. There are 13 R-output lines that can be freely programmed; in the 2090 some of them are used e.g. for addressing the RAM chip (address bus) and some for writing 4-bit values to the RAM (data bus). In addition there are 8 O-outputs that can't be freely programmed. Instead they are set via a 5-bit to 8-bit code converter, and the conversion table was part of the custom CPU programming. From today's perspective it's incredible to see to what lengths they went to reduce circuit logic complexity. So what could a 5-bit to 8-bit code converter be good for? One quite practical application is to illuminate the digits of a 7-segment display. As only one digit of the six-digit display can be driven at a time, it's likely that the R-outputs are not only used for addressing the RAM but also to select one of the six digits of the display, while the 8 O-outputs drive the segments.
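The code converter is essentially a fixed lookup table baked into the chip. A sketch of the idea, using 4-bit digit codes for brevity and an assumed bit order (dp,g,f,e,d,c,b,a), neither of which is taken from the TMS documentation:

```python
# Illustrative conversion table: digit code -> 8 output bits
# (7 segments plus decimal point), the kind of fixed mapping the
# O-output converter performs in hardware.
SEGMENTS = {
    0x0: 0b00111111, 0x1: 0b00000110, 0x2: 0b01011011,
    0x3: 0b01001111, 0x4: 0b01100110, 0x5: 0b01101101,
    0x6: 0b01111101, 0x7: 0b00000111, 0x8: 0b01111111,
    0x9: 0b01101111,
}

def o_outputs(code):
    """Return the 8 O-output levels for a digit code."""
    return SEGMENTS[code & 0xF]

print(bin(o_outputs(4)))  # -> 0b1100110 (segments f, e... off; b, c, f, g on)
```

In the real chip this table lived in a small PLA, so lighting a digit cost no computation at all, only a table index.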

Virtually traveling back in time and seeing a CPU like this in action rather than just reading about it is incredible. I now understand much better how the CPU architecture we still use today came to be and how limitations were lifted over time. An incredible journey that has led me to a number of other ideas and experiments as you shall read here soon.

Have Turned Off Auto-Approval For Comments For The Moment

If you have commented in the past couple of days you have probably noticed that comments are no longer published immediately. Unfortunately I am getting a lot of spam comments at the moment that are not filtered out automatically. As it is less work to approve valid comments than to remove the spam, I've decided to turn off auto-approval of comments for the moment. Sorry for the inconvenience; I'll turn it on again as soon as Typepad can handle the spamming…

Retiring the Dongle Dock

Being a frequent traveler, I was one of the first to wish for a product with which I could share a 3G connection over Wi-Fi. My first article about it dates back to 2006. It took another two years, until 2008, before one of the first easy-to-set-up devices appeared on the market: the Huawei D100, a Wi-Fi access point designed to establish an Internet connection over a separate 3G USB stick. Fortunately I was in Austria at the time and could buy an unlocked version for a few euros. I've used it frequently since then and it has become a mandatory travel accessory for me. Now in 2013, i.e. 5 years later, I am finally about to retire it.

Thanks to Android, Wi-Fi tethering has become a standard feature of most smartphones, and despite limits such as the number of concurrent Wi-Fi connections it supports, it is sufficient for my use. The range of the Wi-Fi chip in a smartphone is perhaps not as good as that of the D100, but in practice the distances I need to cover in hotel and meeting rooms are no problem for a smartphone. Five years of use before a wireless device is retired is quite a thing. Back in 2008, the N95 was the latest and greatest in terms of technology, just to give you an idea of the timeframe we are talking about.

Impact of Virtual Machines on Idle Mode Power Consumption

Ever since I discovered the benefits of running virtual machines on my notebook for a variety of things, and how easy it is in practice, I usually have three of them running at the same time. Yes, three at the same time; with 8 GB of RAM and Ubuntu as the host operating system the experience is quite seamless.

A second Ubuntu usually runs in one virtual machine so I can quickly try things out, install programs I only need for a short time and don't want lingering around on my system, and run a Tor-ified Firefox against unfriendly eavesdropping by half the world's security services. Also, disabling the virtual network adapter and mapping a 3G USB stick or USB Wi-Fi stick directly into the virtual machine gives me a completely separated and independent second computer. Great for networking experiments. The other two machines usually run an instance of Windows XP or Windows 7 for programs that aren't natively available under Linux. There aren't many of those, but they do exist. As the VMs are usually not in the way, I start them but never terminate them unless I need to reboot the host. The only thing I noticed is that there is a power consumption impact.

When I recently took a long train trip I noticed that the remaining operation time indicated in the status bar was about one hour longer than usual. I was puzzled at first, but soon realized the difference: I had rebooted the day before and hadn't needed a VM since. It's obvious that VMs have an idle power consumption impact, because instead of one operating system there are usually four performing their background operations during idle times on my notebook. So while I was surprised, I really shouldn't have been. But the takeaway is that in the future I know a good way to increase the autonomy time in case I need it.

The Map On Paper In the Car On The Way Out

I always like to have a backup plan in place in case something goes wrong. For that reason I have kept a paper map of Europe in the car, just in case there's a problem with the maps and navigation app on my smartphone du jour. But recently I noticed that I can't remember when I last took it out!?

Honestly I can't, and it must be close to 10 years since I last used it. This, the fact that the map must be pretty out of date by now anyway, and usually having more than one device with me these days that can run a navigation app, all make me think that the map is about to be discarded. Or perhaps I should keep it for historical reasons? The last paper map I bought…

Like telephone booths and coins, it's one of those things that mobile devices, mobile voice and mobile Internet access have made superfluous. Can you remember the last time you used a paper map for navigation or orientation?

The ‘Must Read’ Book If You Want To Understand How A Processor Works

When it comes to computers I have always had something of a blind spot: I know how memory works, what Boolean logic is, how a computer adds and subtracts; I know what a bus is, what registers are, how to program in machine language, etc. However, I never quite figured out how the CPU makes data go from RAM to the ALU and, after processing, back to RAM. I always had a vague idea of how it works, but the control unit, whether with fixed control paths or driven by what is called microcode, pretty much remained a black box. Recently I started looking into this topic again and found a number of sources that explain in simple words how a processor works, including the control unit.

An incredible resource I found is a book called "But How Do It Know? – The Basic Principles of Computers for Everyone" by J. Clark Scott. I wouldn't have thought it possible, but within 30 minutes with this book I understood how the control unit in a CPU works (building on my previous understanding of how all the other parts work). And I didn't just understand sort of how it works, but how it really works. The book describes how a CPU and memory work in less than 150 pages, and although that might be considered short, it goes into detail down to the gate level. And it does so in language that can be understood by anyone, even without prior knowledge of electronics. For decades I tried to understand this and always had to abort my efforts at some point. And then the mystery is solved by this book in 30 minutes. It's almost shocking, as is the price of only 16 euros for the paperback version.

There's a 20-minute video on YouTube that is based on the book, also highly recommended. While the video is great, keep in mind that the book goes into much more detail without becoming complicated or boring. Yes, I am very enthusiastic about the book; it has been a real eye opener.

While the book describes a traditional 'wired' control unit built from gates, some processors use a 'microcode' based control unit instead. That sounds even more complicated, but if you have good prior knowledge of how a CPU works (e.g. from reading the book above) and then have a look at this project, which shows how to build a CPU of your own with a microcode based control unit, you'll see that a microcode based control unit is actually simpler to understand than a traditional control unit built from gates. Another revelation for me!
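The core idea of microcode is simple enough to sketch in a few lines: each machine instruction is just an index into a table of micro-operation sequences, and the control unit steps through them. The opcodes and micro-op names below are invented for illustration and don't correspond to any particular CPU:

```python
# Toy microcoded control unit: each opcode maps to a list of
# micro-operations that are executed in sequence.
MICROCODE = {
    "LDA": ["addr_to_bus", "ram_to_acc"],                    # load accumulator
    "ADD": ["addr_to_bus", "ram_to_tmp", "alu_add_to_acc"],  # add memory to acc
    "STA": ["addr_to_bus", "acc_to_ram"],                    # store accumulator
}

def run_instruction(opcode, trace):
    """Step through the micro-program for one instruction,
    recording each micro-op in a trace list."""
    for micro_op in MICROCODE[opcode]:
        trace.append(micro_op)  # a real control unit would pulse control lines here

trace = []
run_instruction("ADD", trace)
print(trace)  # -> ['addr_to_bus', 'ram_to_tmp', 'alu_add_to_acc']
```

Compared to a wired control unit, nothing is "computed": the table simply replaces a web of gates, which is why it can be the easier of the two to understand.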

Another Raspi Application: Check and Report Call-by-Call Tariffs

In Germany we have had a great system in place since the liberalization of the telecoms industry in the 1990s called "Call-by-Call", which lets fixed line customers select a carrier for national and international calls by dialing an additional access code before the telephone number. In practice it has significantly spurred competition and brought prices down. But there's also the danger that some black sheep change prices quickly, first attracting customers with low prices and then increasing them tenfold overnight without informing anyone. But there are two cures now:

The first cure is that call-by-call carriers now have to play an announcement before each call to inform customers of the true cost. Problem fixed, one might think. Well, not quite, because my use of call-by-call is a bit different. I mostly use it for calling abroad from my mobile. Direct calls to international numbers are prohibitively expensive, somewhere between one and two euros a minute. The same calls from my fixed line at home are in the order of 2 to 3 cents a minute. So the solution I have used for many years is to call one of my fixed line numbers at home, which is permanently forwarded to one of the few international destinations I call frequently. Praise to home ISDN, which gives one several phone numbers (5 in my case). The catch when combining unconditional call forwarding and call-by-call is that no announcement is made, because it's not the caller that pays but the forwarder. So my problem was: how do I get informed of price hikes?

In the past I had to look up prices manually and was sometimes surprised by price hikes. Well, no more, because now one of my Raspis does the job for me. With the background I gained when programming my water alarm system with email notification, and recently the automatic download and backup of all pictures of this blog, I could quickly put together a Python program, invoked by cron once a day, that downloads the web page with the prices of my current call-by-call provider, parses it for the prices to the destination countries I'm interested in, compares them against defined maximum values, and sends me an email with the prices and a general verdict in the subject line telling me whether everything is still o.k. With the code for sending an email already there from my previous project, and the knowledge I had gained of how to parse strings in Python, it took less than two hours to get it working.
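The structure of such a script looks roughly like this. The URL, the price regex and the thresholds below are invented placeholders, not my provider's actual page format:

```python
# Sketch of the price-check idea: fetch a tariff page, extract prices,
# compare against thresholds. All names and formats are illustrative.
import re
import urllib.request

MAX_CENT_PER_MIN = {"US": 3.0, "FR": 3.0}  # alarm thresholds in ct/min

def parse_prices(html, countries):
    """Extract 'Country ... 2,5' style prices (German decimal comma)."""
    prices = {}
    for country in countries:
        match = re.search(re.escape(country) + r"\D*(\d+,\d+)", html)
        if match:
            prices[country] = float(match.group(1).replace(",", "."))
    return prices

def verdict(prices):
    """True if every extracted price is still below its threshold."""
    return all(p <= MAX_CENT_PER_MIN[c] for c, p in prices.items())

def fetch_and_check():
    """Cron entry point: download the page and print the verdict."""
    html = urllib.request.urlopen("http://example.com/tariffs").read().decode()
    prices = parse_prices(html, MAX_CENT_PER_MIN)
    print(("OK" if verdict(prices) else "PRICE HIKE"), prices)
```

In the real script, cron calls the entry point once a day and smtplib sends the result by email, putting the verdict in the subject line.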

Another cloud service at home, excellent! So why am I writing about this? Well, first of course because I'm proud to have done it, and done it that quickly. But I also write about it because it shows what can be triggered once you have computing resources at home running 24/7 that you can play around with, using your imagination and creativity.

And if you want to have a look at the code here we go: Download Call-By-Call-Pi-Code

How 2G We Still Were 10 Years Ago

During a recent basement search session I came across a couple of old newspaper pages from December 2003, i.e. from about 10 years ago, that I had used for wrapping some stuff, with mobile phone advertising on them. I bought my first UMTS phone in December 2004, with hardly any network to use it with where I lived at the time. These advertising pages predate my UMTS entry by a year and clearly show that UMTS was nowhere in sight then. GSM and GPRS phones were being advertised on both pages; MMS and VGA (640×480 pixel) cameras were the highlights of the day. I also wanted to compare prices a bit, but based on the information contained in the advertisements it's not really possible. Also, no prices whatsoever were given for GPRS. And it was only 10 years ago…

TRIM your SSD

About a year ago SSD prices had fallen to around €300 for a 500 GB drive, and I couldn't resist any longer replacing my notebook hard drive with an SSD. The speed-up has been tremendous and I haven't regretted it since. Just for the fun of it I've been keeping a close eye on how much data gets written to the SSD, and even with a huge safety margin, the roughly 5 GB written per day with my usage pattern will not wear out the SSD for at least 30 years. Always ready to optimize, however, I noticed recently that TRIMing was not activated for the drive.

As the flash cells in an SSD can only be rewritten a couple of thousand times, SSDs have sophisticated algorithms to ensure write operations are distributed evenly over the complete drive. That sometimes requires relocating the data of static files that never change to another part of the drive, so those little-used flash cells can be used for something else. This is known as 'wear leveling'.

When a file is deleted, its blocks are marked as empty in the file system. Unfortunately, the SSD knows nothing of file systems, as those are a higher layer concept; SSDs only deal with blocks. That means that once a block has been written, the SSD doesn't know whether it contains valid data or whether it belongs to a file that was deleted. In both cases the wear leveling algorithm has to copy the data somewhere else before the block can be reused. This is obviously sub-optimal, as a block that contains data of a previously deleted file wouldn't need to be copied before it is overwritten.

This is where the TRIM command comes in. When a file is deleted, the file system first deletes the file and marks the blocks the file used as empty. That's standard procedure. But then, in addition, it sends a TRIM command to the SSD, informing it that those blocks are now empty. The SSD can then also mark the blocks as empty and no longer has to bother copying the useless data they contain when a block is due for wear leveling relocation.
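The saving can be made concrete with a toy model. This is purely illustrative bookkeeping, not how an SSD controller is actually implemented:

```python
# Toy model of why TRIM helps: without it, wear leveling must copy
# every block that was ever written, even if its file was deleted.
class ToySSD:
    def __init__(self):
        self.written = set()   # blocks that were ever written
        self.trimmed = set()   # blocks the OS reported as free via TRIM

    def write(self, block):
        self.written.add(block)
        self.trimmed.discard(block)  # fresh data makes the block valid again

    def trim(self, block):
        self.trimmed.add(block)      # OS says: this block holds stale data

    def blocks_to_copy(self):
        """Blocks wear leveling must relocate before reusing their cells."""
        return self.written - self.trimmed

ssd = ToySSD()
for b in (0, 1, 2, 3):
    ssd.write(b)
ssd.trim(2)  # file on block 2 deleted, OS sends TRIM
print(sorted(ssd.blocks_to_copy()))  # -> [0, 1, 3]
```

Without the `trim()` call, block 2 would stay in the copy set forever even though its contents are garbage, which is exactly the unnecessary work TRIM eliminates.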

By default, Linux has TRIM disabled, but it's pretty simple to activate by putting the "discard" option in /etc/fstab for the partition that resides on the SSD. Here's an example:

UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /  ext4    discard,errors=remount-ro 0       1

After a reboot TRIM should be activated. This post explains how to make sure it is active. And for more details on TRIM here's the Wikipedia entry!