Book Review – Where Wizards Stay Up Late

On I go in my quest to learn more about the history of computing. After visiting the 1940s and 50s in Pioneer Programmer, I jumped forward a decade and a half to learn a bit more about the origins of the Internet.

While I knew that the Internet grew out of the ARPANET, my picture of how that happened had been fuzzy at best. A lot of things became much clearer after reading Katie Hafner and Matthew Lyon’s account “Where Wizards Stay Up Late”, which tells how engineers at Bolt, Beranek and Newman (BBN) turned the ideas and visions of J.C.R. Licklider and others into reality, and how people whose names are well known in the industry today, such as Vint Cerf and Bob Kahn, got hooked and designed TCP/IP.

An interesting side note: The book was published long ago, in 1998, but since it only describes events up to the mid-1980s, it has aged well and is as readable and interesting today as it was over 15 years ago.

Coming back to the content: the book is very well researched and written, and it’s fun to follow the story line. One thing that frustrated me a bit at times is that it addresses a non-technical audience and hence doesn’t really go into technical details; instead, it often tries to describe its way around the geeky stuff. Fortunately, there’s the Internet and Wikipedia, so it’s easy to get the details on specific parts of the story, including easy access to original documents.

In other words, a perfect symbiosis of storytelling and online background research. Actually, it’s an interesting recursion: I used the Internet to download the book and to do the background research, which means the Internet practically tells its own story.

A highly recommended read!

How To Move From Typepad To WordPress

In a free and open web, one would expect to be able to move one’s website from one service to another without too much hassle. But unfortunately, many parts of the web are neither free nor open, and escaping with a blog from Typepad to a WordPress installation requires a bit of tinkering. While quite a number of reports by others on how to move away from Typepad exist on the web, I thought I’d add my story as well, because in the end it was less complicated than I had thought. Overall, it took me about one and a half days to get things done. It could have gone faster, but I wanted to experiment a bit to get exactly what I wanted. Read on for the full story.


Linux And A Good Backup Strategy Save The Day

When you travel a lot, chances are good that at some point your computing hardware fails without prior notice or gets stolen. It will happen; it’s just a question of when, and one had better be prepared for it. In fact, I was prepared, and it paid back handsomely when a notebook under my care was stolen out of a backpack in a restaurant in Paris last week. First question of the owner: What will they do with my data? Second question: What shall I do now? I can’t work without the notebook!

Answer to the first question: They won’t do anything with your data. Your notebook ran Ubuntu, it was encrypted to counter exactly this scenario, and your password was long and diverse enough to withstand casual and less casual brute-force attacks. And besides, those people were probably just interested in the hardware anyway… So rejoice that you weren’t running Windows, which doesn’t encrypt anything unless you have the Pro version… Yeah!

Answer to the second question (what shall I do now): 1. Don’t panic (I’m sure you have a towel with you). 2. Don’t worry, the last backups of the system partition and the data partition are only 3 days old; that’s the amount of data you have lost. And 3. Clonezilla restores your system to a new SSD in 15 minutes. Restoring your 600 GB of data to the user partition takes a little while longer, but it will be done in time for me to catch that 6 am train to Paris to deliver the 1:1 replacement (minus 3 days worth of data).
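For those who haven’t seen Clonezilla in action: it is normally driven through its menus from a bootable USB stick, but underneath, everything maps to a single command. A minimal sketch of what such a restore boils down to, with a made-up image name and /dev/sda as the target disk (the exact flags vary between Clonezilla versions, so treat this as illustration only):

# restore the saved image "notebook-2016-01" to the new disk sda
# -g auto: reinstall grub, -r: resize the file system to the new partition size,
# -j2: clone hidden data between MBR and first partition, -p choose: ask what to do when finished
sudo /usr/sbin/ocs-sr -g auto -e1 auto -e2 -r -j2 -p choose restoredisk notebook-2016-01 sda

Conveniently, Clonezilla prints the exact command line it assembled at the end of an interactive session, so it is easy to capture a known-good command for one’s own setup.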

So, as sad as the story is, it’s great to have a working backup strategy that gets you back up and running in 15 minutes on totally different hardware, with everything (still) installed and configured like on the “old” one. Thanks, Clonezilla!

No Love In Ubuntu 14.04 for OpenVPN and IPv6/UDP for Transport

On my way into IPv6 land, the next stop was OpenVPN land, to see if I could establish an OpenVPN tunnel over an IPv6 UDP connection. Note that in this scenario I only want to have IPv4 inside the tunnel as before, but the tunnel itself should use a UDPv6 instead of a UDPv4 connection. It turns out that OpenVPN does support this in principle, but one has to decide whether to start the OpenVPN server in IPv4 or IPv6 mode; dual-stack operation is not yet included. Not ideal, but for testing purposes I would have switched to IPv6 on the server side. Unfortunately, it never came to that, because the OpenVPN client in Ubuntu’s NetworkManager offers no option, neither in the GUI nor in the NetworkManager configuration files, to specify UDPv6 as the transport protocol. Quite a pity, as this excludes everyone with a Dual-Stack Lite cable modem from installing an OpenVPN server at home and using it together with Ubuntu’s NetworkManager. Perhaps the OpenVPN client can be launched from a shell, but where’s the fun in that?
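For reference, the IPv4-or-IPv6 decision on the server side is a single directive in the configuration file. A minimal sketch, with placeholder values for the port and tunnel addresses:

# server.conf – IPv4 addresses inside the tunnel, UDP over IPv6 as transport
proto udp6                      # instead of "proto udp"; it's one or the other, no dual-stack
port 1194
dev tun
server 10.8.0.0 255.255.255.0   # the payload inside the tunnel remains plain IPv4

A client started from the shell would then use a matching proto udp6 line in its .ovpn file (or --proto udp6 on the command line), which is exactly the knob NetworkManager doesn’t expose.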

Now We Can Almost Switch Off UMTS

Now that all German network operators have switched on VoLTE for voice services on LTE and are transitioning their subscribers to VoLTE step by step, I can visualize a mid-term UMTS switch-off quite well. Agreed, we aren’t quite there yet, but the list of reasons to keep 3G running has become significantly smaller:

One major aspect will be how quickly VoLTE actually takes off and thus reduces the need to fall back to 3G during voice calls. For the moment, network operators seem to move their subscribers to VoLTE step by step, some slower, some faster. In other words, even though VoLTE is now up and running, not everyone automatically uses it.

Also, to be able to use VoLTE, one needs an LTE smartphone with an embedded VoLTE client. For the moment, only network operator device variants, at least in Germany and I suppose in the rest of Europe as well, come equipped with VoLTE capabilities. Buy the same device outside an operator store and it won’t come with VoLTE. That will change in the future once the dust has settled a bit and device manufacturers, operating system and chipset vendors start treating VoLTE as a black box, but I think that is still some time away. Two to three years seems a realistic time frame to me until VoLTE comes out of the box in every new LTE device sold outside operator stores, but that’s just a gut feeling.

And once that is in place, network operators have to wait a while until the “installed base” of non-LTE and non-VoLTE devices has thinned out considerably. Telenor in Norway says it expects all of this to happen by 2020, by which time it wants to switch off its 3G network. And in 2025, it wants to ax its GSM network as well. The timing is a bit tight, but if a network operator accepts that voice fallback for non-VoLTE devices will be to 2G, without data capabilities during the call, then these deadlines can certainly be met.

How To Get That Dynamic IPv6 Address To The DNS Server

In a previous post on my first “IPv6 call” to a service at home, I reported on the pains of finding a Dynamic DNS provider that supports IPv6 and does so with a Time To Live (TTL) value of 60 seconds. The latter part is important, as this is the worst-case time the domain is unreachable after the IPv6 address changes. For the moment, my solution is a CNAME entry in the domain I host at noip.com that points to another domain name at dynv6.com; for details, see the previous post.
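In zone-file notation, the construct looks roughly like this (host names shortened to placeholders):

; at noip.com: the public name is just an alias for the dynv6 name
myhost.example.net.  60  IN  CNAME  myhost.dynv6.net.
; at dynv6.com: the AAAA record carries the actual, changing IPv6 address
myhost.dynv6.net.    60  IN  AAAA   2001:db8:4711:1:4ecc:6aff:fe12:3456

The 60-second TTL on both records is what keeps the unreachability window short.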

The next challenge was to find out how to automatically update the IPv6 address at the DNS server once it changes, as this does not work the same way as for IPv4 addresses. Updating a dynamic IPv4 address at a DNS server is straightforward: either the Network Address Translation (NAT) router already has the functionality to send an update message to the dynamic DNS server, or a simple http(s) request from any machine behind the NAT with some authentication information in the URL does the trick. No IP address is required in the request, as the DNS server sees the request arriving from the router’s public IP address and just takes the value from layer 3.
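As an illustration, the whole IPv4 update can be a cron-able one-liner; the URL pattern and token below are invented, each provider documents its own:

# no address in the request – the dynamic DNS server uses the source IP it sees at layer 3
curl -s "https://update.example-dyndns.net/update?hostname=myhost.example.net&token=SECRET"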

With IPv6, this doesn’t work, for two reasons. First, there is no NAT anymore, so each server behind the router has its own public IPv6 address and thus has to send the request itself. O.k., so far no big deal. The second problem is that the server can have more than one public IPv6 address. While the public prefix of these addresses is the same, there can be different interface IDs if IPv6 privacy extensions are enabled. On a server, privacy extensions are not necessary, but Ubuntu Server seems to have them switched on by default. Also, for a little while, there can be deprecated IPv6 addresses in the address list with a prefix that is no longer valid. To get the IPv6 address across to the dynamic DNS server, one could do as before and just send an http(s) request with a token, provided the update URL is reachable via IPv6 only. The problem with this approach is that the update domains of the services I tried do not support that; they are reachable over both IPv4 and IPv6.

Another problem occurs when privacy extensions are used on the server, as described before. In this case, the interface ID of one of the IPv6 addresses bound to the interface is randomly assigned. While the dynamic DNS server won’t mind, the router at home will, because, at least on my model, incoming IPv6 requests to all interface IDs are blocked by default and exceptions have to be configured statically. That’s a good thing, but it requires that the interface ID of the server remains static. One of the two IPv6 addresses on my server fulfills this requirement, as its interface ID is derived from the Ethernet adapter’s MAC address, so I can use this interface ID to configure the IPv6 firewall in the router. However, the request to the dynamic DNS server to update the IPv6 address does not originate from this address but from the one with the randomly assigned interface ID. Twice out of luck.
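The two flavors of address are easy to spot with the ip tool. On a machine with privacy extensions enabled, the output looks roughly like this (prefix replaced with the 2001:db8:: documentation range):

$ ip -6 addr show dev eth0 scope global
    inet6 2001:db8:4711:1:89ab:cdef:1234:5678/64 scope global temporary dynamic
       valid_lft 86352sec preferred_lft 14352sec
    inet6 2001:db8:4711:1:4ecc:6aff:fe12:3456/64 scope global dynamic mngtmpaddr
       valid_lft 86352sec preferred_lft 14352sec

The second address, with the MAC-derived interface ID (recognizable by the ff:fe in the middle), is the stable one to put into the router’s firewall configuration; the temporary one above it is what outgoing connections actually use.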

The solution to the problem is to find out which global dynamic IPv6 addresses are currently auto-configured on the server and to select the one from the list that uses the MAC address as the interface ID. The easiest way to do this I have found so far is a great shell script dynv6.com offers for this purpose, which can be found here. For my purposes, I’ve made two changes:

First, I adapted the command that finds the current IPv6 address. The original script does not take IPv6 privacy extensions into account, which can be fixed by modifying the line as follows:

address=$(ip -6 addr list scope global $device | grep -v " fd" | grep "global dynamic" | sed -n 's/.*inet6 \([0-9a-f:]\+\).*/\1/p' | head -n 1)

And the second change I made in the script is to use https instead of http for the update queries, as dynv6 supports https as well; no need to spread the word about my ID token in plain http requests. The beauty of the script is that it checks whether the IPv6 address has changed and only sends an update if it has. This way, I can call the script from a cron job once a minute without sending an update to dynv6.com every time.
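The corresponding crontab entry is then a one-liner; the path is a placeholder, the host name is the dynv6 one from above, and the script expects the update token in an environment variable:

# check every minute; the script only contacts dynv6.com when the address has actually changed
* * * * * token=MY_SECRET_TOKEN /home/user/bin/dynv6.sh myhost.dynv6.net >/dev/null 2>&1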

There we go, this problem is now solved as well!


Why I Left Typepad For A Self-Administrated WordPress Blog

Welcome, this is the first ‘original’ post on WirelessMoves’ new platform. I’ve been a loyal Typepad customer for 10 years, but a number of reasons accumulated over time that finally made me switch to a self-installed and self-administrated WordPress instance in the cloud. In case you are interested in the details of why I switched, read on.

One thing that bugged me for many years is that my $50 per year account at Typepad would not allow me to use my own domain name. I could have had my own domain linked to Typepad, of course, but after a few years without one, retrofitting it later wasn’t appealing anymore. Typepad’s pricing for a domain of my own wasn’t that appealing, either.

Next, there’s no way around the fact that my blog in 2015 still looked almost identical to how it looked a decade ago. What was slick and modern at the time looks a bit rusty today; the world wide web and its design have moved on significantly. Also, a mobile-friendly design is a must-have today, and Typepad didn’t offer an answer for me here, either. In other words, Typepad seems to be pretty much in maintenance-only mode rather than trying to continue offering an appealing platform for content creators. Over the years, the platform seems to have changed hands a couple of times, and the current owner apparently has no intention of changing this sad state of affairs.

On the technical side, a number of gripes have accumulated as well. There’s no IPv6 and, even worse, there is no secure http, not even in the writer’s user interface. While the log-in procedure is protected by https, the platform immediately falls back to http. Especially when using public Wi-Fi hotspots and other non-secure places, this is a significant problem, as the browser cookie giving me editing rights can easily be intercepted. Obviously, I always use a VPN whenever I’m not at home, but it should be in Typepad’s own interest to keep their customers safe.

Next on the list of things I really would have liked to have is internal statistics about what is read on my blog, beyond Typepad’s meager info on how many pages have been accessed per day. I did use an external service for this purpose for many years, but it shouldn’t really have been necessary. Also, Typepad embedded Google Analytics in my blog without my consent, for their own tracking purposes. And finally, Typepad never offered a public search functionality for my blog. Sure, you can use Google or another search engine for the purpose but, again, it should be part of the platform.

So here we go, that’s the list, and it makes me wonder why it took me so long to make the switch!? A self-administered WordPress installation, coupled with the right hosting platform, fortunately offered a solution to each and every one of these issues, especially when it comes to IPv6 and https. In a previous post, I wrote about the cool features of Uberspace’s hosting platform, and this is where I migrated my blog to. The domain name is in my hands, WordPress is open source, and should I decide in the future that I don’t like it there anymore, I’m free to go instantly.

Unfortunately, Typepad doesn’t make transferring a blog to another service exactly easy, but I got it done in a day and a half. More about that in a follow-up post.

32C3 – An Angel In Retrospect – Being Part Of The Conference Instead Of Just Attending

And here is one more post about my 32C3 experience at the end of last year in Hamburg. This was the first conference I did not only “attend” but was actually “a part of”. There is a big difference between the two approaches: Normal conferences are fully organized, and you go there to listen to the talks, to meet and talk to people you already know and perhaps, if you are the communicative type, to meet a few new people who share your interests. The annual CCC congresses are different in this respect, because attendees are encouraged to help with many different aspects of the event: checking tickets at the entrance, being part of the wardrobe team, operating a camera, helping people find their way around the congress, helping people with their network problems, and so on.

On the one hand, this helps to keep ticket prices down, because the 1,500 volunteers who signed up as congress “angels” put in 10,000 hours of work, all voluntarily and for free. That saves a lot of money. Like me, many might not have altogether altruistic motives to volunteer. Apart from being happy to help, I became a congress angel to get a glimpse of how and by whom the event is organized and how things work behind the scenes. I signed up for a couple of camera shifts and in addition spent some time at the network help desk. Not only did I learn a lot about how the congress is run, I also met a lot of people during my network help desk shifts, both people seeking help and other network angels on the same shifts, who freely shared their ideas on the stuff they were having fun with during the less busy times (after all, this was a hacker conference, so there weren’t too many people who had network issues with their equipment that they couldn’t figure out themselves). If I had just “attended” the congress, I would never have met all these people, and it wouldn’t have been half the fun it was!

The crucial thing about becoming an angel and volunteering at the congress is that there is a system that makes it easy and flexible in the extreme. The main idea is that nobody is assigned anything; everyone has complete control over what they want to do and when. The place where work and volunteers come together is the web-based “angel system”, which works equally well on big and small devices. Here, one can pick tasks and 2-hour time slots before and during the conference that fit into one’s overall schedule. I took camera shifts for presentations I wanted to attend anyway, and network help desk duties at times when there was no talk I wanted to go to. During the congress, my plans changed slightly, and I could rearrange my shifts in the “angel system” in a jiffy from my smartphone. A great system that gives the conference the volunteers it needs and the volunteers the freedom to assign themselves tasks and stay in control. Wonderful!

I’m totally hooked on the concept and feel encouraged to be part of the event even more next time rather than just attending. So if you plan to come to a CCC congress in the future, sign up as an “angel” before you arrive and have more fun!

LTE-A Pro for Public Safety Services – Part 3 – The Challenges

In case you have missed the previous two parts on Private Mobile Radio (PMR) services over LTE, have a look here and here before reading on. In the previous post, I described the potential advantages LTE can bring to PMR services, and from that long list it seems to be a done deal. On the other hand, there is unfortunately an equally long list of challenges that PMR poses for the 2G legacy technology it uses today, challenges that will not go away when moving on to LTE. So here we go: part 3 focuses on the downsides, which show quite clearly that LTE won’t be a silver bullet for the future of PMR services:

Glacial Timeframes: The first and foremost problem PMR imposes on the infrastructure is the glacial timeframe requirements of this sector. While consumers change their devices every 18 months these days and move from one application to the next, a PMR system is static; in the past, a time frame of 20 years without major network changes was considered the minimum here. It’s unlikely this will change significantly in the future.

Network Infrastructure Replacement Cycles: Public networks, including radio base stations, are typically refreshed every 4 to 5 years, because new generations of hardware are more efficient, require less power, are smaller, offer new functionality, can handle higher data rates, and so on. In PMR networks, timeframes are much more conservative, because additional capacity is not required for the core voice services and there is no competition from other networks, which in turn gives operators no incentive to make their networks more efficient or to add capacity. Also, new hardware means a lot of testing effort, which again costs money that can only be justified if there is a benefit for the end user. In PMR systems this is a difficult proposition, because PMR organizations typically don’t like change. As a result, the only reason for PMR network operators to upgrade their infrastructure is that the equipment reaches end of life, is no longer supported by manufacturers, and spare parts are no longer available. The pain of upgrading at that point is even more severe, as after 10 years or so technology has advanced so far that going from very old hardware to the current generation creates many problems.

Hard- and Software Requirements: Anyone who has worked in both public and private mobile radio environments will undoubtedly have noticed that quality requirements are significantly different in the two domains. In public networks, the balance between upgrade frequency and stability often tips towards the former, while in PMR networks stability is paramount and testing is hence significantly more rigorous.

Dedicated Spectrum Means Trouble: An interesting question, which will surely be answered differently in different countries, is whether a future nationwide PMR network shall use dedicated spectrum or shared spectrum that is also used by public LTE networks. Using dedicated spectrum that is otherwise not used for public services means that devices need receivers for that dedicated spectrum. In other words, no mass-market products can be used, which is always a cost driver.

Thousands, Not Millions of Devices per Type: When mobile device manufacturers think about production runs, they think in millions rather than in a few ten-thousands as in PMR. Perhaps this is less of an issue today, as current production methods allow design and production runs of 10,000 devices or even less. But why not use commercial devices for PMR users and benefit from economies of scale? Well, many PMR devices are quite specialized from a hardware point of view, as they must be sturdier and have extra physical controls, such as big push-to-talk and emergency buttons that can be pressed even with gloves. Many PMR users will also have different requirements than consumers when it comes to the screen of the device, such as being ruggedized beyond what is required for consumer devices and being usable in extreme heat, cold and wetness, or when chemicals are in the air.

ProSe and eMBMS Not Used For Consumer Services: Even though they are also envisaged for consumer use, it is likely that group call and multicast services will in practice be limited to PMR use. That will make them expensive, as the development costs will have to be shouldered by the PMR sector alone.

Network Operation Models

As already mentioned above, there are two potential network operation models for next generation PMR services, each with its own advantages and disadvantages. Here’s a comparison:

A Dedicated PMR Network

  • Nationwide network coverage requires a significant number of base stations, and it might be difficult to find enough suitable sites for them. In many cases, base station sites can be shared with commercial network operators, but often enough, masts are already used by the equipment of several network operators and there is no more space for dedicated PMR infrastructure.
  • From a monetary point of view, it is probably much more expensive to run a dedicated PMR network than to use the infrastructure of a commercial network. Also, initial deployment is much slower, as no already-installed equipment can be reused.
  • Dedicated PMR networks would likely require dedicated spectrum, as commercial networks would probably not give back any spectrum they own so that PMR networks could use the same bands to make their devices cheaper. This in turn would mean that devices would have to support a dedicated frequency band, which would make them more expensive. From what I can tell, this is the path chosen in the US, with LTE band 14 reserved for exclusive use by a PMR network. LTE band 14 is adjacent to LTE band 13, but still, devices supporting it might need special filters and RF front-ends for that frequency range.

A Commercial Network Is Enhanced For PMR

  • High Network Quality Requirements: PMR networks require good network coverage, high capacity and high availability. Also, due to security concerns and the need for fast turn-around times when a network problem occurs, local network management is a must. These days, that is typically only found in networks that focus on quality rather than on budget.
  • Challenges When Upgrading The Network: High quality network operators are also keen to introduce new features to stay competitive (e.g. higher-order carrier aggregation, traffic management, new algorithms in the network), which is likely to be hindered significantly if the contract with the PMR user requires the network operator to seek consent before network upgrades.
  • Dragging PMR Along For Its Own Good: Looking at it from a different point of view, it might be beneficial for PMR users to be piggybacked onto a commercial network, as this ‘forces’ them through continuous hardware and software updates for their own good. The question is how much drag PMR inflicts on the commercial network, and whether it can remain competitive when slowed down by PMR quality, stability and maturity requirements. One thing that might help is that PMR applications could and should run on their own IMS core, with relatively few dependencies further down the network stack. This could allow commercial networks to evolve as required by competition and advances in technology, while PMR applications evolve on dedicated and independent core network equipment. Any commercial network operator seriously considering taking on PMR organizations should investigate this impact on their network evolution and assess whether the additional income from hosting the service is worth it.

So, here we go, these are my thoughts on the potential problem spots for next generation PMR services based on LTE. Next up is a closer look at the technology behind it, though it might take a little while before I can publish a summary here.

Uberspace, IPv6, Let’s Encrypt, Domain Names, etc. etc.

Over the weekend I wanted to set up a cloud-based project management software, and after my default web hoster failed miserably, I took the opportunity to try a new hosting company I had heard of some time ago, called Uberspace. This post is probably only interesting for German speakers, as Uberspace only has a German web presence, sorry about that. So you might wonder why I am reviewing a web hoster; that's quite out of the ordinary for this blog!? Right, but this web hoster is, too!

Unlike the hosting services I've been using for over a decade, which have become big behemoths that are only interested in the masses and offer a very limited feature set and not a tiny bit more, Uberspace offers a lot of features, online documentation that is very nerdy and fun to read, and maximum freedom for my web hosting requirements. For starters, they don't want money up front; you can try the service for free for a month. If you walk away during that time, the account is simply deleted, no questions asked. If you want to stay, you decide how much you want to pay per month. They give some guidance on what they think this should be (€5) and a detailed overview of their own costs, from power consumption to hardware purchases. I like transparency and details! They also point out that if you are cash-starved you can pay less. Paying more is also possible, of course. A wonderful approach that seems to work; they've been around for a while.

Apart from the easy signup process, all the rest is pretty much straightforward as well. After FTPing the project management software to my virtual web server and creating a MySQL database via the web-based admin frontend, I could immediately start working with it and could access it over both HTTP and HTTPS with the default domain name given to my account. Adding my own domains to the web space is simple as well: a single command in the shell and it is done. Afterwards, the IPv4 and (optionally) the IPv6 address of the site need to be provisioned in the DNS server, which, by the way, they don't provide, so you can and have to bring your own domain names. It worked like a charm, both for IPv4 and IPv6. Wonderful!
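The DNS part is just two records at whoever hosts your zone, pointing at the addresses assigned to the Uberspace account (the values below are placeholders from the documentation ranges):

blog.example.org.  3600  IN  A     198.51.100.42
blog.example.org.  3600  IN  AAAA  2001:db8:1f0a::42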

To use HTTPS with my own domain name, an SSL certificate is required, and Uberspace offers two ways to get one in place. The old-fashioned way is to buy an SSL certificate somewhere and then import the certificate and key files into the web space. The cool way, possible since last December, is to use Let's Encrypt, and Uberspace is probably one of the first web hosters to have integrated it. It took about two minutes and three commands on the shell to request the generation and installation of the certificate. It was so simple I couldn't believe it until I checked that the Let's Encrypt certificate was actually used when I browsed to my site. Awesome!
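From memory, and with the caveat that the command names are best checked against Uberspace's own documentation, the three shell commands boiled down to something like this:

uberspace-letsencrypt     # one-time preparation of the Let's Encrypt client configuration
letsencrypt certonly      # request and fetch the certificate for the domain
uberspace-add-certificate -k privkey.pem -c cert.pem   # install key and certificate for the web server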

Freedom, IPv6, Let's Encrypt, great nerdy online documentation, and my website up and running with my own domain and https in less than an hour: Uberspace certainly got me hooked!