A quick observational post today from the Hong Kong metro. Not surprisingly, smartphones and tablets are universally used by HK residents in the metro these days. What I found surprising, however, is what they do with them while commuting. While I was expecting that most of them would browse the news, this was actually an activity I could only seldom observe. Instead, by far the number one application is texting in all shapes and forms. Next is playing offline games, followed by watching videos and making phone calls. Only a few people were actually using their devices for web browsing or reading ebooks. Quite a different usage from my own in public transportation. But then I am fortunate enough that my suburban trains are usually only two-thirds full, so I usually get a seat and can open up the notebook for writing blog posts, emails, etc.
Author: Martin
Fixing ALL login issues for web service logins with SQRL
In the past couple of years we've become accustomed to weekly news of grand-scale username and password thefts at major web services. Many people use very insecure passwords that can be cracked in seconds, and by reusing the same password across many web services they make the whole username-and-password system even weaker. In addition, viruses and Trojan horses try to capture username and password combinations directly on PCs to get access to banking web sites and other high-value targets. To me it looks like the situation is getting more and more out of control. While two-factor authentication (e.g. an SMS with an additional code sent by the bank before a transaction is made) fixes some of the issues for some web services, it's too cumbersome for everyday logins. But now Steve Gibson, famous for his SpinRite product and perhaps even more for his weekly Security Now podcast, has come up with a solution that fixes all of this. Too good to be true? I thought so, too, at first, but it seems he's really figured it out.
The core of his solution, which he named SQRL (Secure QR Code Login), is that web services no longer store usernames and passwords but just a public key that was sent by the user when he first registered with the web site. For login, the web site sends a random number that is signed on the client side with the user's secret key to generate a response. On the web service's side the response is verified with the public key agreed upon during initial registration. In other words, the secret is no longer in the hands of the web service but in the hands of the user. That means there is no longer a password database with millions of entries worth stealing on the web service's side. As each web service gets a different public key with the SQRL method and a different random number is used for each login, there's no password leakage between services due to the reuse of the same username and password across different sites, as done by many users today to make their lives simpler. Also not to be underestimated is the advantage that no password has to be typed in, which fixes the issue that simple-to-remember and easy-to-crack passwords are used.
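To illustrate the idea, here is a small Python sketch of the two properties that matter: a per-site key derived from a single master secret, and a challenge-response login. I'm using symmetric HMAC as a self-contained stand-in for SQRL's elliptic-curve signatures, and the function names are mine, not SQRL's:

```python
import hashlib
import hmac
import os

# The user's single master secret; in SQRL it never leaves the client.
MASTER_KEY = os.urandom(32)

def site_key(domain: bytes) -> bytes:
    # SQRL derives a distinct key per web site from the master key;
    # here the derivation is sketched with HMAC-SHA256.
    return hmac.new(MASTER_KEY, domain, hashlib.sha256).digest()

def answer_challenge(domain: bytes, nonce: bytes) -> bytes:
    # Client side: answer the server's random challenge with the
    # per-site key. (Real SQRL signs the challenge with an asymmetric
    # key so the server only ever holds the public half.)
    return hmac.new(site_key(domain), nonce, hashlib.sha256).digest()

# Different sites get unrelated keys, so nothing stolen from one
# service helps an attacker log in to another.
assert site_key(b"bank.example") != site_key(b"shop.example")
```

Because each login uses a fresh random challenge, a captured response is useless for a replay later on, which is what makes the scheme robust against eavesdropping.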
On the client side the use of SQRL is straightforward. Either a smartphone is used to scan a QR code on the login page for an out-of-band authentication, which is the most secure way to access a web service as long as the secret key can be stored securely on the mobile device. Or a browser plugin detects that a web service offers SQRL login and automatically generates the response.
For more, head over to Steve's page that explains the details, or listen to the podcast/videocast on the topic where he introduces SQRL starting at around 38 minutes in. I am amazed and very enthusiastic about it and hope we'll see implementations of this in the wild soon.
First Multi-Frequency LTE Networks Observed in the Wild
While LTE roaming and LTE for prepaid SIMs are still not really a reality, I was positively surprised to see that a number of network operators are already deploying LTE on several frequency bands. This is unlike UMTS, which most network operators have deployed in only one band at any one location (yes, there are a few exceptions, but not many).
One multi-frequency LTE deployment I have recently observed with my inexpensive layer 1 scanner solution (for details see here and here) is the Vodafone LTE network in Den Haag in the Netherlands, where they have LTE active in the 800 MHz digital dividend band as well as in the 1800 MHz band. Add to that the capacity of their 3G network on the 2.1 GHz band and it's a fair assumption they won't run out of capacity any time soon.
And the second deployment I have observed is China Mobile Hong Kong. This one is very interesting. On the 1800 MHz band they have deployed a 3 MHz carrier (!!), while further up the frequency dial they are on air with a 15 MHz carrier in the 2.6 GHz band. As 3 MHz isn't really all that much, I wonder if that 3 MHz carrier will still be on air a year or two down the road, when not only most but all LTE-capable phones that support the 1800 MHz band also support the 2600 MHz band. Another option would of course be to get some additional 1800 MHz spectrum and then increase the carrier bandwidth. But I'm not into Hong Kong spectrum assignment details, so I don't know if that's an option for them down the road.
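To put the difference between the two carriers into perspective, here is a quick sketch using the standard LTE mapping of channel bandwidth to physical resource blocks (per 3GPP TS 36.101); peak throughput scales roughly linearly with the number of resource blocks:

```python
# LTE channel bandwidth (MHz) -> number of physical resource blocks,
# per the standard 3GPP mapping (TS 36.101)
PRBS = {1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100}

# The 15 MHz carrier at 2.6 GHz offers about five times the capacity
# of the 3 MHz carrier at 1800 MHz.
ratio = PRBS[15] / PRBS[3]
print(ratio)  # → 5.0
```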
How To Get A SIM For Internet Access At Hong Kong Airport
I've been to Hong Kong recently, and as I like to try out local networks and also to have a plan B just in case the hotel Wi-Fi is crappy, I wanted to buy a SIM card from one of the local network operators at Hong Kong airport. It turns out that Hong Kong is one of the places where it's quite easy to get a SIM at the airport. A Google search brought me to this recent blog entry that describes a number of options.
The blog entry recommended One2Free, an MVNO on the CSL network which offers a weekly all-you-can-eat data plan on a prepaid SIM for 79 Hong Kong dollars, which is around €8. Getting the SIM card took me about 10 minutes, and in the five days I was in Hong Kong I almost burned through a gigabyte of data for everything from email to Skype video calling without the connection being throttled at any point. Data rates were OK, with up- and downlink speeds in the 3-4 Mbit/s range, thanks in part to the CSL in-house coverage in my hotel.
And it turned out that I was in dire need of that plan B, as the hotel Wi-Fi network was, to put it kindly, slow in the evening, and in addition my company VPN couldn't connect through that network. A colleague from another company had a similar problem. Over the CSL 3G connection the VPN worked just fine. I'd say those were 10 very worthwhile minutes at the airport.
I’m Now Also Disabling 2G For Data – I Need a ‘3G/4G-Only’ Switch
If smartphone user interface designers are reading this blog, I have a feature request: I need a '3G/4G only' switch in the settings that disables GSM while leaving the device the option to roam between UMTS and LTE. Let me explain:
Last year I decided to set the smartphone I use for voice and small-screen web browsing to '3G only' mode to prevent fallback to GSM, as HD Voice (WB-AMR) is still only deployed in 3G, and dropping to GSM during a call results in noticeably worse speech quality. Also, I like receiving my emails during lengthy conference calls and sometimes looking up background information on the web, both of which are blocked after a fallback to 2G. And air interface security is better on 3G. So far, so good; this works well.
For data connectivity for my notebook on the daily commute and on longer train trips I use Wi-Fi tethering to another device that is LTE capable. Unfortunately, LTE is not as widespread as 3G yet, so I have the network type selection set to 2G/3G/4G. This way I get LTE where it's available in stationary places and 3G on the train, as LTE coverage is still somewhat patchy. Once on UMTS the device is then stuck there for the rest of the trip, because there is currently no way to get from UMTS back to LTE while in Cell-DCH state. But it's still multiple megabits per second, so no real complaints here for the moment. Not so good, however, is that the connection sometimes even drops to GSM, and then it takes quite a while to get back to UMTS, as the device is busy transmitting data and has only little time to search for a reappearing UMTS network. But these days GPRS and EDGE are almost unusable for the amount of data I consume on the notebook, so I wonder if that fallback still makes sense.
Therefore I'd rather the Wi-Fi hotspot device didn't fall back to GSM at all and just 'rode out' the temporary lack of UMTS coverage, as I have the impression that the device then finds the UMTS network again much faster. Obviously I can set the device to UMTS-only mode, just like my smartphone, but then I disable LTE as well, which I really like in stationary places due to its much faster uplink compared to UMTS. In other words, I'd really like to see a 'UMTS/LTE only' mode setting.
The DIY-CPU Project – The Clock Generator
After reading about how a simple CPU works in J. Clark Scott's book and other sources, I've been toying with the thought of building a CPU myself. It's a bit of a project and it will take a while, but the journey is the reward. As one has to start somewhere, I decided to start with the clock, as the details of how one is built are a bit vague in the book.
As the CPU is for educational purposes I want it to run at a clock rate of around one Hertz, so one can actually see how things are working. In addition to a long clock cycle that drives an 'enable' signal to put something on the data and address bus that interconnects everything, a shorter pulse in the middle of the overall clock cycle is required to drive the 'set' inputs of a number of components such as the registers, the memory, etc., to tell them to take over what's currently on the bus. These two signals can be generated out of the clock and a copy of it delayed by a fraction (a quarter, in the book's scheme) of a clock cycle. The book describes how to use AND and OR gates to generate the enable and set pulses out of those signals, but not how to create the clock signal itself or how to derive the delayed clock from the original clock.
So I improvised a bit and used two inverters (NOT gates) with a resistor and capacitor for a 1 Hz clock generator (in the middle of the picture), a transistor, some resistors and a capacitor for the delayed but inverted signal (the right part of the picture), and another NOT gate to invert the delayed signal back. The circuit with the AND and NOT gates described in the book to generate the two output signals is shown on the left, together with two LEDs to visualize the final signals.
The result looks a bit complicated, but it's actually not, because there are three distinct building blocks that work independently of each other. One thing that does make it look a bit complicated is the use of one AND gate of the chip on the left and three NOT gates of the chip in the middle to create a single OR gate.
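The logic that derives the two pulses can be sketched in a few lines of Python, sampling one full clock cycle in quarter steps (assuming the book's quarter-cycle delay):

```python
def derive_pulses():
    # One full clock cycle sampled in quarter steps.
    clk   = [1, 1, 0, 0]  # the base 1 Hz clock
    clk_d = [0, 1, 1, 0]  # the same clock, delayed by a quarter cycle
    enable = [a | b for a, b in zip(clk, clk_d)]  # clk OR clk_d
    set_   = [a & b for a, b in zip(clk, clk_d)]  # clk AND clk_d
    return enable, set_

# The OR produces a long 'enable' window, the AND a short 'set'
# pulse that falls safely inside it.
print(derive_pulses())  # → ([1, 1, 1, 0], [0, 1, 0, 0])
```

This also shows why the delay matters: with a full half-cycle delay the AND of the two signals would never go high, so no set pulse would ever be generated.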
Using parts of the electronic kits I got as a teenager and parts of kits I recently bought to have a more complete setup was ideal for prototyping the circuit. I'm sure there are a million ways to build this more efficiently and with fewer parts. But efficiency was not the point of the exercise. There we go, the first step towards my own CPU is done.
Anti-Noise Headset for the Mobile Traveler
I spend a lot of time commuting and traveling to far-away places, so I spend a lot of time in trains, cars and planes. Especially in cars and planes I usually make good use of the time by reading or writing something, such as this blog entry for example. But there's usually one thing in the way, and that's noise: the noise made by the vehicle itself, frequent (useless) announcements, and other travelers. Up to a certain level I can ignore it and get on with whatever I'm doing. But at some point, especially when people close to me start talking, my concentration is usually gone. Earplugs help somewhat, but only to a certain extent. I've long wished for noise-canceling headsets to go further. I had some in the past, but they had limited effect, and when I lost the plastic ear tips and couldn't get replacements I never ventured into this area again. Then recently I read a number of raving reports in several places about the new Bose QC20 in-ear noise-canceling headsets. To say they were positive would be an understatement, so I couldn't wait for them to become generally available (looks like the Bose PR department has done its job well).
What's definitely not an understatement is the price. 300 euros is a tough number, but for really good noise suppression I was willing to spend the money. So I got myself a QC20 and swallowed hard when swiping the credit card through the reader, ah, no, actually when clicking on the "One Click To Buy" button online.
Needless to say, I couldn't wait for them to arrive so I could give them an instant test. Amazing: when pressing the silence button, the external environment in trains, train stations and the office just goes away. If a person nearby speaks loudly, a little extra music on top of the noise suppression makes that sound go away, too. Incredible.
The other thing that always bothered me about in-ear headsets is that they get uncomfortable after a while. The QC20, however, is not a classic in-ear headset, as it's not held in place by pressing something into the ear canal. Instead, it fixes itself with a plastic holder that fits inside the outer ear. Perfect: I've worn them for several hours a day over several days now and it never hurt a bit.
And finally, when not suppressing the noise, the headset still analyzes the sound environment and compensates for the plastic sealing the ear. This is great, as without it, just like with other in-ear headsets, the external environment sounds artificial, and I get a strange and uncomfortable feeling when I speak myself, as that has a strange effect on a blocked ear canal. The compensation works great, and it almost feels like not having earplugs in at all when switching to "listen to the outside world" mode.
I have high hopes for my next plane trips as well. On intercontinental flights, current over-ear headsets were of little use to me, as one can't wear them when trying to sleep on one's side. With the QC20's in-ear, or rather on-ear, design it might just be possible now.
Despite the super-high price I am still full of praise for the headset; traveling and working in noisy office environments has become very different. Let's see how this story develops and what I think about the headset in a couple of months.
The Computer – Levels of Understanding – And Building My Own CPU
Looking at historical educational computing kits that explain how computers work, rather than 'only' how to work with and program computers, I started thinking a bit about the different levels of abstraction at which people understand computers. Here's what I came up with:
The Working Level Understanding: This is how most people understand computers today. They use them as tools and know how to work with programs that serve their needs, such as word processors, spreadsheets, web browsers, etc. Most people on this level, however, know little about what's inside that notebook or smartphone and cannot explain the difference between, let's say, a hard drive and RAM, or may not even know that such things exist.
Hardware Understanding: The next level, from what I can tell, is knowing about the components a computer consists of, such as the processor, RAM, the hard drive, etc., and what they do.
Programming: The next level is programming. One can certainly learn to program without knowing about the hardware, but I guess that knowledge comes naturally in the process of learning how to program anyway.
Understanding how the individual components work: The next level is to understand how the different parts of a computer work and what they are based on, i.e. logic gates, bits and bytes, to simplify it a bit. There are certainly different depths one can go into on this level, as on pretty much all other levels as well. The "But How Do It Know?" book I reviewed some time ago is one of the best ways to really get comfortable on this level.
The physics behind the gates: Next in line is understanding how gates are built, i.e. understanding how transistors work and how they are implemented in silicon. I liked this video on YouTube, which gives a good introduction from a non-technical point of view. Obviously one can go much further here, down to the quantum level and beyond, but I think the basics of this level are still understandable for somebody interested in the topic without a deep technical background.
Personally, I think I have a pretty good grasp of most of these levels, at least from a high-level point of view. But I decided to go a bit deeper into understanding how the individual components work. As I said in a previous post, I learned early in my career how a CPU works and what is inside. However, the control part always remained a bit mysterious to me. I wouldn't have thought it possible to build my own CPU before, but after reading the "But How Do It Know?" book plus some extra material I am sure I can pull it off, given some time and dedication. So there we go, I have my new quality-time project: building my own CPU. I'll call it the Do It Yourself (DIY) CPU and will of course blog about it as things develop 🙂
Why Open Source Has Become A Must For Me
While the Internet is doubtlessly a great invention and I wouldn't want to miss it in my daily life anymore, there are certainly downsides to it. Last year I summarized them in a post titled "The Anti-Freedom Side Of The Internet". While I have found solutions for some of the issues I discussed there, such as privacy issues around remotely hosted cloud services, there is one topic I touched on too lightly that has become much more apparent to me since then: the changing business models and interaction of software companies with their customers, which are not necessarily always to the advantage of the customers compared to pre-Internet times.
In pre-Internet times, software was bought on disks or CDs and installed on a computer. For most commercial software you got a usage license of unlimited duration, and the user was in control of the software and the installation process. Fast forward to today and the model has changed significantly. Software is now downloaded over the Internet and installed. The user's control over the process, and his privacy, are largely gone, because most software now requires Internet connectivity to communicate with an activation server of some sort before it installs. While I can understand such a move from the software companies' point of view, I find it highly controversial from a user's point of view, because there is no control over what kind of information is transmitted to the software company. Also, most software today frequently 'calls home' to ask for security and feature updates, and perhaps also for other purposes. While this is good on the one hand to protect users, it is again a privacy issue, because the computer frequently connects to other computers on the Internet in the background, without the user's knowledge or consent and without any insight into what is transmitted.
And with some software empires on the decline, a new and interesting license model, unthought of in pre-Internet times, is the annual subscription. Adobe is going down that path with Photoshop, and Microsoft wants to do the same thing with its Office suite: instead of selling a time-unlimited license once, they now want to sell time-limited licenses that have to be renewed once a year. Again, understandable from the software companies' point of view, as that ensures a steady income over the years. From a user's point of view I am not really sure, as it means there are yearly maintenance costs for software on computers at home that simply were not there before.
I wonder if that will actually accelerate the decline of those companies. If you buy software once, you are inclined to use it as long as possible and perhaps buy an update every now and then. But if you are faced with a subscription model where you have to pay once a year to keep that software activated, I wonder if at some point people will be willing to try out the alternatives. And alternatives there are, such as Gimp for graphics and of course LibreOffice.
Already today I see a lot of people using LibreOffice on their PCs and Macs, so that trend is definitely well underway. Perhaps it is also triggered by people no longer using only a single device, which would require more than one paid license. Also, the increasing number of different file formats and versions makes sending a document to someone for review and getting back a revision that is still formatted as before a gamble, so why stick to a particular program or version of a word processor?
In other words, Open Source is the solution in a world where the Internet allows software companies to assert more control over their customers than many of them are likely to want. Good riddance.
Historical Computing And The Busch 2090 – Simulating A Newer CPU Architecture On An Old Processor
It took a while to get hold of one, but I finally managed to get a 1980s Busch 2090 microcomputer, which I mused about in this and other previous blog posts. What I could previously only read about in the manual I could now finally try out myself on this 30-year-old machine. And, true to the saying that when you read something you remember it but when you do something yourself you understand it, I found out quite a number of things I had missed when only reading about it. So here's the tale of working with and programming a 30-year-old machine that was there to teach kids and adults how computers work rather than how to work with computers:
The 2090 is programmed on a hexadecimal keyboard (see figure on the left) in a slightly abstracted pseudo machine code. It makes a number of things easier, such as querying the keyboard or displaying something on the six-digit 7-segment display, but otherwise it looks like machine code. After doing some more research into the 4-bit TMS processor used in the 2090, I found out that it is a direct descendant of the first Texas Instruments microprocessor, with a few more input/output lines, RAM and ROM added. Otherwise the processor works like its predecessor, the TMS 1000 from 1972. In other words, when the 2090 appeared in 1981 the processor architecture was already rather dated, and much more sophisticated processors such as the Intel 8080, the Zilog Z80, the Motorola 6800 and the MOS 6502 were available. While microcomputer learning kits appearing on the market a year or two later used these or other 8-bit processors, Busch decided to use an old 4-bit architecture. I can only speculate why, but pricing was perhaps the deciding factor.
Some research on the net revealed some more material about the CPU and the other chips used. The manuals of the TMS 1000 architecture, which also cover later versions such as the 1400 and 1600, can be found here and here. These documents are quite fascinating from a number of perspectives, as they go into detail on the architecture and the instruction set and also give an interesting impression of how what we call 'embedded computing systems' today were programmed in the 1970s. Simulators were used to test the program, which was then baked into a ROM on the CPU chip as part of production. No way to change it later on, so it had better be perfect before the production run.
What surprised me most when studying the hardware architecture and instruction set is that it is very different from the pseudo machine code presented to the user. My impression is that the pseudo machine code was very much inspired by newer processor architectures, with a lot of registers and a combined instruction and data RAM residing in a separate chip. The TMS 1600, however, has nothing of the sort. Instructions and data are kept separate on the chip: all 'real' machine code instructions are in a ROM that is accessed via an instruction bus that is separate from the bus over which the built-in memory is accessed.
While the pseudo machine code uses 16 registers, the processor itself only has an accumulator register. The 16 registers are simulated using the 64 x 4-bit RAM of the TMS 1600 which, on the real machine, is accessed as RAM over an address bus and not as registers. In addition, the processor chip has no external bus to connect to external memory. There are input and output lines, but their primary purpose is not to act as a bus system. The 2090, however, uses an external 1 kbyte RAM that is accessed via those input/output lines. In effect, the small operating system simulates an external bus to that memory chip, in which the pseudo machine code the user typed in resides. Very impressive!
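As a tiny illustration of the idea of simulating registers in plain RAM, here is a Python sketch; the one-register-per-nibble layout is my own assumption for the example, not the 2090's actual memory map:

```python
# 64 nibbles of on-chip RAM, as on the TMS 1600
ram = [0] * 64

def write_reg(r, value):
    # A pseudo register is just a reserved RAM cell; the one-to-one
    # layout used here is assumed for illustration.
    assert 0 <= r < 16 and 0 <= value <= 0xF
    ram[r] = value

def read_reg(r):
    return ram[r]

write_reg(5, 0xA)
print(read_reg(5))  # → 10
```

The monitor program does the equivalent of this bookkeeping for every pseudo-machine-code instruction, which is why the user never notices that the 'registers' are really memory cells.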
There are a number of support chips on the board, used for purposes such as multiplexing different hardware units (the keyboard, the display, LEDs, input connectors and the memory) onto the input/output lines. As the input and output lines are separate on the chip and do not work like a bidirectional bus, one of the support chips offers tri-state capability for some of the hardware so it can be removed from the bus.
The TMS 1600 also has no stack as we know it today. Instead, it has three subroutine return registers, so subroutine calls can only be nested three levels deep at any one time. This is already an improvement over the original TMS 1000, which only had one such register. Another interesting fact is that the TMS 1600 doesn't have instructions for integer multiplication and division.
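The difference to a modern stack can be made concrete with a toy model in Python (my own sketch, not TMS code): a fixed set of return registers behaves like a call stack that simply refuses a fourth nested call.

```python
class ReturnRegisters:
    # A toy model of the TMS 1600's three subroutine return registers:
    # a fixed-depth call "stack", unlike the arbitrarily deep stacks
    # of later architectures.
    def __init__(self, depth=3):
        self.depth = depth
        self.slots = []

    def call(self, return_addr):
        if len(self.slots) == self.depth:
            raise OverflowError("more than 3 nested subroutine calls")
        self.slots.append(return_addr)

    def ret(self):
        return self.slots.pop()

regs = ReturnRegisters()
for addr in (0x10, 0x20, 0x30):
    regs.call(addr)       # three nested calls are fine
print(hex(regs.ret()))    # → 0x30 (last in, first out)
```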
Apart from the accumulator there are the x- and y-registers. These registers, however, are used to address the data RAM. A separate 6-bit program counter is used to address the ROM. While the pseudo machine code uses a zero flag and a carry flag, something that is part of all popular 8-bit microprocessor architectures even today, there are no such flags in the TMS 1600. Instead, there's only a status register that acts as a carry or zero flag depending on the operation performed. Also, the processor doesn't have the capability for indirect or indexed addressing.
Also quite surprising was that there are no binary logic instructions such as AND, OR, XOR, etc. in the CPU's instruction set. These therefore have to be simulated for the pseudo machine code, which contains such commands, again resembling the instruction sets of other 'real' CPUs at the time.
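To see how a logic instruction can be simulated without one, here is a Python sketch of a 4-bit AND built only from the kind of primitives such a CPU does offer: bit tests, halving (a shift) and addition. This is purely illustrative; the 2090's actual monitor routine may work differently.

```python
def and4(a, b):
    # Bitwise AND of two nibbles using only tests, halving and adding.
    result, weight = 0, 1
    for _ in range(4):
        if a % 2 == 1 and b % 2 == 1:  # lowest bit set in both?
            result += weight
        a //= 2   # shift right by one bit
        b //= 2
        weight *= 2
    return result

print(bin(and4(0b1100, 0b1010)))  # → 0b1000
```

Four iterations of test-and-add per AND instruction shows nicely why simulated logic operations were so much slower than native ones.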
And another nifty detail is the two different kinds of output lines. There are 13 R-output lines that can be freely programmed; some of them are used in the 2090 e.g. for addressing the RAM chip (address bus) and some for writing 4-bit values to the RAM (data bus). In addition there are 8 O-outputs that can't be freely programmed. Instead, they are set via a 5-bit to 8-bit code converter, and the conversion table was part of the custom CPU programming. From today's perspective it's incredible to see to what lengths they went to reduce circuit logic complexity. So what could a 5-bit to 8-bit code converter be good for? One quite practical application is to illuminate the digits of a 7-segment display. As only one digit of the six-digit display can be accessed at a time, it's likely that the 8 O-outputs are not only used for addressing the RAM but also to select one of the six digits of the display.
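As a toy illustration of such a code converter, here is a lookup table that turns a digit code into the standard 7-segment pattern (bit order g f e d c b a); this is the common textbook encoding, not necessarily the exact table masked into the 2090's CPU:

```python
# digit -> 7-segment pattern, bits g f e d c b a (1 = segment lit)
SEGMENTS = {
    0: 0b0111111, 1: 0b0000110, 2: 0b1011011, 3: 0b1001111,
    4: 0b1100110, 5: 0b1101101, 6: 0b1111101, 7: 0b0000111,
    8: 0b1111111, 9: 0b1101111,
}

def o_outputs(digit):
    # The fixed converter: a small input code selects one of the
    # pre-programmed output patterns (the eighth bit is unused here).
    return SEGMENTS[digit]

print(bin(o_outputs(8)))  # → 0b1111111, all seven segments on
```

Baking the table into mask-programmed hardware means the running program only ever has to emit a small code, which is exactly the kind of logic-saving trick the designers were after.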
Virtually traveling back in time and seeing a CPU like this in action, rather than just reading about it, is incredible. I now understand much better how the CPU architecture we still use today came to be and how its limitations were lifted over time. A fascinating journey that has led me to a number of other ideas and experiments, as you shall read here soon.