Upgrading Ubuntu With Minimal Downtime And A Fallback Option

When it comes to my notebook, which I use around 25 hours per day, I'm in a bit of a predicament. On the one hand, it must be stable and ultra-reliable. That means I don't install software on it that I don't really need and resort to virtual machines for such things instead. On the other hand, I also like new OS features, which meant I had to upgrade my Ubuntu 12.04 LTS to 14.04 LTS at some point. But how can that be done with minimal downtime, without running the risk of embarking on lengthy fixing sessions after the upgrade, and without potentially having to find workarounds for things that no longer work?

When I upgraded from a 512 GB SSD to a 1 TB SSD a few weeks ago and got rid of my Truecrypt partitions, I laid the foundation for just such a pain-free OS update. The cornerstone was to have an OS partition that is separate from the data partition. This way, I was able to quickly create a backup of the OS partition with Clonezilla and restore it to a spare hard drive in a spare computer. And thanks to Ubuntu, the clone of my OS partition runs perfectly even on different hardware. And quick in this case really means quick: while my OS partition has a size of 120 GB, only 15 GB are used, so the backup takes around 12 minutes. In other words, the downtime of my notebook at this point of the upgrade was 12 minutes. Restoring the backup on the other PC took around 8 minutes.
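Whether those numbers add up is easy to sanity-check, since Clonezilla only copies used blocks and not the whole partition. Here's a small back-of-the-envelope sketch in Python; the effective throughput of roughly 21 MB/s is my own assumption (compression, USB and disk speed all play into it), chosen because it reproduces the roughly 12-minute outage I observed:

```python
# Back-of-the-envelope check of the Clonezilla backup duration.
# Only used blocks are copied, so the 120 GB partition size does not matter.
# The effective throughput is an assumption, not a figure Clonezilla reports.

def backup_minutes(used_gb: float, effective_mb_per_s: float) -> float:
    """Estimated copy time in minutes for a given amount of used space."""
    return used_gb * 1024 / effective_mb_per_s / 60

if __name__ == "__main__":
    # 15 GB of used space at ~21 MB/s comes out at roughly 12 minutes,
    # which matches the observed downtime of the notebook.
    print(f"{backup_minutes(15, 21):.1f} minutes")
```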

On this separate PC I could then upgrade my cloned OS partition to Ubuntu 14.04, sort out small itches and ensure that everything was still working. As expected, a couple of things broke: my MoinMoin wiki installation got a bit messed up in the process and Wi-Fi suspend/resume with my access point also got a bit bruised, but everything else worked just as it should.

Once I was satisfied that everything was working as it should, I used Clonezilla again to create a backup of the cloned OS partition and then restored this to my production notebook. Another 12-minute outage, plus an additional 3 minutes to restore the boot loader with a "Boot Repair" USB stick, as my older Clonezilla version could not restore the Ubuntu 14.04 Grub boot loader after the restore process.

And that's it, Ubuntu 14.04 is now up and running on my production PC with as little as two 12-minute outages. In addition, I could try everything at length before I committed to the upgrade, and I still have the backup of the 12.04 installation that I could restore in 12 minutes should the worst happen and I discover a showstopper down the road.

So was it worth all the hassle, other than being able to boast that I have 14.04 up and running now? Yes, I think it was, and here's a list of things that have significantly improved for my everyday use:

  • Video playback is smoother now (no occasional vertical shear anymore)
  • The dock now shows the names of all LibreOffice documents
  • Newer VirtualBox, which seems to be faster (graphics, windows, etc.)
  • MTP of more phones is recognized
  • Can be booted with an external monitor connected without issues
  • Nicer fonts in Wine apps (Word, etc.)
  • Nicer animations/lock screen
  • Updated LibreOffice with improved .doc and .docx support
  • The 5-year support period now starts from 2014
  • Better position to upgrade to 16.04 in 2 years
  • Menus in the header save space
  • VLC has more graphical elements now

Walking Down Memory Lane – 10 Years Ago, My First 3G Mobile

Is 10 years a long or a short timeframe? It depends, and when I think back to my first UMTS mobile, which I bought 10 years ago to the day (I checked), the timeframe seems both long and short at the same time. It seems like an eternity from an image quality point of view, as is pretty much visible in the first picture on the left, the very first photo I took with my first UMTS phone, a Sony Ericsson V800 – Vodafone edition. Some of you might spot another UMTS phone on the table, a Nokia 6630, but that was a company phone so it doesn't count.

On the other hand, 10 years is not such a long time when you think about how far the mobile industry has come since. Back in 2004 I had trouble finding UMTS network coverage, as mostly only bigger cities (population > 500,000 perhaps) had 3G coverage at the time. That first UMTS phone was also still limited to 384 kbit/s, no HSDPA, no dual-carrier, just a plain DCH. But it was furiously fast for the time, the color display was so much better than anything I had before and the rotating camera in the hinge was a real design highlight. Today, 10 years later, there's almost nationwide 3G and even better LTE coverage, speeds in the double-digit megabit/s range are common, and screen size, UI speed, storage capacity and camera capabilities are orders of magnitude better than back then.

Even more amazing is that at the time, people in 3GPP were already thinking about the next step. HSDPA was not yet deployed in 2004 but already standardized, and meetings were already being held to define the LTE we are using today. Just to get you into the mindset of 2004, here are two statements from the September 2004 "Long Term Evolution" meeting in Toronto, Canada:

  • Bring your Wi-Fi cards
  • GSM is available in Toronto

In other words, built-in Wi-Fi connectivity in notebooks was not yet the norm and it was still not certain that there would be GSM coverage in places where 3GPP went. Note, it was GSM, not even UMTS…

I was certainly by no means a technology laggard at the time, so I can very well imagine that many delegates attending the Long Term Evolution meeting in 2004 still had a GSM-only device that could do voice and SMS, but not much more. And still, they were laying the groundwork for an LTE that was so far away from the reality of the time that it almost seems like a miracle.

I close for today with the second image on the left, which shows my first privately owned GSM phone from 1999, a Bosch 738, my first UMTS phone from 2004 and my first LTE phone, a Samsung Galaxy S4 from 2014 (again, I had LTE devices for/from work before, but this is the first LTE device I bought for private use). 15 years of mobile development side by side.

Some Musings About LTE on Band 3 (1800 MHz)

It's 2014 and there is no doubt that LTE on Band 3 (1800 MHz) has become very successful; the Global mobile Suppliers Association (GSA) even states that "1800 MHz [is the] Prime Band for LTE Deployments Worldwide". When looking back 5 years to 2009/2010, when the first network operators began deploying LTE networks, this was far from certain.

Quite the contrary: deploying LTE on 1800 MHz was seen by many I talked to at the time as a bit of a gamble. The general thinking, for example in Germany, was more focused on 800 MHz (band 20) and 2600 MHz (band 7) deployments. But as the GSA's statement shows, the gamble has paid off. Range is said to be much better compared to band 7, so operators who went for this band in auctions, or could re-farm it from spectrum they already had for GSM, have an interesting advantage today over those who need to use the 2600 MHz band to increase their transmission speeds beyond the capabilities of their 10 MHz channels in the 800 MHz band.

To me, an interesting reminder that the future is far from predictable…

Smartphone Firmware Sizes Rival Those Of Desktop PCs Now

Here's the number game of the day: when I recently installed Ubuntu on a PC I noticed that the complete image that installs everything from the OS to the office suite has a size of 1.1 GB. When looking at firmware images of current smartphones I was quite surprised that they are at least the same size or even bigger!

If you want to check, search for "<smartphone name> stock firmware image" on the net and see for yourself. Incredible, there's as much software on mobile devices now as there is on PCs!

A lot of it must be crap- and bloatware, though, because Cyanogen firmware images have a size of around 250 MB. Add to that around 100 MB for a number of Google apps that need to be installed separately and you are still only at about a third of a manufacturer's stock firmware image size.
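To put the rough numbers next to each other (all of them the estimates from above, not exact measurements), the arithmetic looks like this:

```python
# Rough size comparison in MB, using the estimates from the text above.
stock_firmware_image = 1100     # typical smartphone stock firmware, "at least" the size of
                                # the 1.1 GB Ubuntu desktop image incl. office suite
cyanogen_image = 250            # community firmware image
google_apps = 100               # Google apps that need to be installed separately

lean_stack = cyanogen_image + google_apps
print(f"Lean Android stack: {lean_stack} MB")
print(f"Fraction of a stock firmware image: {lean_stack / stock_firmware_image:.0%}")
```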

Check The Hotel’s Wi-Fi Speed Before Reserving

Whenever I make a hotel reservation these days I can't help wondering how good their Wi-Fi actually is, or whether it works at all. Most of the time I don't care because I can use my mobile data allowance anywhere in Europe these days. Outside of Europe, however, it's a different story, as data is more expensive, so there I still do care. Recently I came across HotelWifiTest, a website that focuses on the data rates of hotel Wi-Fi networks, based on hotel guests using the site's speed tester. Sounds like an interesting concept, and it promises good speeds for the next hotel I'm going to visit. So let's see…

A Capacity Comparison between LTE-Advanced CA and UMTS In Operational Networks Today

With LTE-Advanced Carrier Aggregation being deployed in 2014, it recently struck me that there's a big difference in deployed capacity between LTE and UMTS now. Most network operators have had two 5 MHz UMTS carriers deployed in busy areas for quite a number of years. In some countries, operators with more spectrum have deployed three 5 MHz carriers, but I'd say that's rather the exception. On the LTE side, operators with enough spectrum have deployed two 20 MHz carriers in busy areas and can easily extend that with additional spectrum in their possession as required. That's also a bit of an exception, and I estimate that most operators have deployed between 10 and 30 MHz today. In other words, at the high end it's 15 MHz of UMTS compared to 40 MHz of LTE. Quite a difference, and the gap is widening.
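Expressed as plain numbers (the carrier counts being the typical high-end deployments described above, not figures from any particular network), the gap looks like this:

```python
# Deployed downlink spectrum per operator in a busy area (typical high-end figures).
umts_mhz = 3 * 5    # three 5 MHz UMTS carriers, already rather the exception
lte_mhz = 2 * 20    # two 20 MHz LTE carriers aggregated with LTE-Advanced CA

print(f"UMTS: {umts_mhz} MHz, LTE: {lte_mhz} MHz "
      f"-> {lte_mhz / umts_mhz:.1f} times more LTE spectrum on air")
```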

Pushing My VPN Gateway Speed to 20 Mbit/s With A BananaPi

To secure my fixed and mobile data transfers I've been using OpenVPN for many years now. With fixed and mobile networks becoming faster, I have to continuously improve my setup as well to make maximum use of the available speed at the access. At the moment, my limit on the server side is 30 Mbit/s, while on the access side, my Wi-Fi to VPN gateway's limit is 10 Mbit/s. Time to change that.

A quick recap of what has happened so far: earlier this year I moved from an OpenVPN server on an OpenWRT Wi-Fi router to an OpenVPN server running on a Raspberry Pi. At the time, my VDSL uplink of 5 Mbit/s was the limit. With that limit removed, the next bottleneck was the processing capacity of the Raspberry Pi, which limited the tunnel to 10 Mbit/s. The logical next step was to move to a BananaPi, whose limit with OpenVPN is around 30 Mbit/s.

In many cases I was still limited to 10 Mbit/s, however, as I was using a Raspberry Pi as a Wi-Fi / VPN client gateway to tunnel the data traffic of many Wi-Fi devices through a single tunnel. For details, see this blog entry and the wiki and code for this project on Github. To move beyond the 10 Mbit/s, I had to upgrade the hardware on this side to a BananaPi as well. The process is almost straightforward because I run Lubuntu 14.04 on the BananaPi which, like Raspbian running on the Raspberry Pi, is based on Debian Wheezy. With a few adaptations, the script I put together for the Raspberry Pi also runs on the BananaPi and converts it into an OpenVPN client gateway in a couple of minutes.

While I expected to see a throughput of 30 Mbit/s, the link between the two BananaPis levels out at 'only' around 20 Mbit/s, as shown in the screenshot on the left. I haven't yet found out why this is the case, as on both devices the processor load is around 65%, so there are ample reserves left to go faster. For the moment I've run out of ideas as to what it could be. However, doubling the speed with this step is not too bad either.
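One thing I'd try next to narrow down the 20 Mbit/s ceiling is a raw TCP throughput test between the two BananaPis, once across the tunnel and once across the plain LAN path, to see whether the limit sits in OpenVPN itself or somewhere else on the path. Here's a minimal sketch of such a test with plain Python sockets; the port number and test duration are arbitrary placeholders and this is of course no replacement for a proper tool like iperf.

```python
#!/usr/bin/env python3
"""Minimal TCP throughput test to help localize the 20 Mbit/s ceiling.

Run "python3 tputtest.py server" on one BananaPi and
"python3 tputtest.py client <server-ip>" on the other, once with the
tunnel IP and once with the plain LAN IP, then compare the results.
Port number and test duration are arbitrary choices.
"""
import socket
import sys
import time

PORT = 5201          # placeholder port, change as needed
CHUNK = 64 * 1024    # 64 KiB per send/receive call
DURATION = 10        # seconds of data transfer per run


def server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", PORT))
    srv.listen(1)
    print("Waiting for a client on port %d ..." % PORT)
    conn, addr = srv.accept()
    received = 0
    start = time.time()
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        received += len(data)
    elapsed = time.time() - start
    conn.close()
    srv.close()
    print("Received %.1f MB in %.1f s = %.1f Mbit/s"
          % (received / 1e6, elapsed, received * 8 / elapsed / 1e6))


def client(host):
    payload = b"\0" * CHUNK
    conn = socket.create_connection((host, PORT))
    sent = 0
    start = time.time()
    while time.time() - start < DURATION:
        conn.sendall(payload)
        sent += len(payload)
    elapsed = time.time() - start
    conn.close()
    print("Sent %.1f Mbit/s to %s" % (sent * 8 / elapsed / 1e6, host))


if __name__ == "__main__":
    if len(sys.argv) == 2 and sys.argv[1] == "server":
        server()
    elif len(sys.argv) == 3 and sys.argv[1] == "client":
        client(sys.argv[2])
    else:
        print("Usage: tputtest.py server | client <server-ip>")
```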

From Half a Million to A Billion – Size of Mobile Network Operators

Just a quick post today because it struck me what a wide difference in size there is between network operators. On the high end of the scale there are network operator organizations that serve countries with a population of over a billion, i.e. 1000 million people, and have a significant market share. And on the other end of the spectrum there are countries, yes, independent countries, with just half a million inhabitants in Europe, i.e. countries that are much smaller than even a single mid-sized city in bigger countries.

In other words, even if one of the network operators in such a country is dominant, it doesn't have more than a few hundred thousand subscribers. Between 1000 million and less than a million lie 3 orders of magnitude! It's breathtaking that the way mobile networks are built and operated works on both ends of the spectrum, and that there doesn't necessarily seem to be a sweet spot somewhere in between for an ideal network and organization size.

Fiber Connectivity in Paris – Some Images

A few weeks ago I reported on my stellar speed experience with fiber connectivity in Paris, with downlink and uplink speeds of 264 Mbit/s and 48 Mbit/s respectively. Today, I've got a follow-up with a couple of pictures and some technical background information.

Fiber and Copper in the Apartment

Let's start at the end of the fiber. The first picture shows two boxes stacked on top of each other. The bigger one below is a standard Wi-Fi access point with router functionality, which is connected to the small box via an Ethernet cable. The small box is a fiber-to-copper converter. The green cable going into the small box is the fiber cable. The small box gets pretty warm, so it's safe to assume it draws more than the 2.5 watts of a Raspberry Pi…

The second picture is a close-up of the fiber-to-copper converter, the Optical Network Terminal (ONT). The Ethernet cable on the left is connected to the bigger Wi-Fi network box shown in the first picture. The optical cable with the green connector on the right goes to the next box in the apartment, shown in picture 3. As no power is delivered to that box, it must be a passive component that connects the sturdier optical cable coming into the apartment to the more flexible optical cable with the green connectors.

And that's it as far as the equipment in the apartment is concerned. The fourth picture shows how the optical cable gets into the apartment via a crudely drilled hole that was filled with some glue afterward. Not quite a work of art, to say the least.

Yes, it's GPON!

So what kind of fiber technology is used for this line? The model number of the fiber-to-copper converter in picture 2 (I-010G-Q) gives the first clue, which a Google search turns into a number of interesting links to follow. The most interesting one is to lafibre.info, which contains lots of pictures of how the outdoor part of fiber networks is installed in France. The search for the model number also led me to a pretty interesting document from Alcatel-Lucent which details their Gigabit Passive Optical Network (GPON) components and network setups on 250+ pages. So there we go, the I-010G-Q is part of a GPON installation: 2.4 Gbit/s in the downlink and 1.2 Gbit/s in the uplink direction, shared between all installations behind one fiber strand, which is split close to an apartment building into separate strands, one for each customer.
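Since those 2.4/1.2 Gbit/s are shared by everyone behind the same splitter, the interesting question is what remains per subscriber if all of them transmit at once. A quick sketch, with the 1:32 and 1:64 split ratios being typical GPON values I'm assuming here, not something I could verify for this particular installation:

```python
# Worst-case per-subscriber share of a GPON tree if all subscribers are active at once.
# Line rates from the GPON figures above; split ratios are typical values (assumption).
DOWNLINK_GBITS = 2.4
UPLINK_GBITS = 1.2

for split in (32, 64):
    down = DOWNLINK_GBITS * 1000 / split   # Mbit/s per subscriber
    up = UPLINK_GBITS * 1000 / split
    print("1:%d split -> %.0f Mbit/s down / %.1f Mbit/s up per subscriber"
          % (split, down, up))
```

In practice a single user still sees the 264 Mbit/s I measured, of course, because the other subscribers on the tree are rarely all active at the same moment.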

From an evolution point of view, the document's 2010 creation date is also interesting. In other words, GPON is well into its 4th year of deployment now and has come nowhere near capacity issues so far. And that's unlikely to happen anytime soon, i.e. there's no immediate need to beef up the specs to make it even faster. The challenge with GPON is rather that optical cables need to go into buildings and from there into apartments to deliver speeds in the Gbit/s range. And that certainly comes at a price.

The Next Step In LTE Carrier Aggregation: 3 Bands

The hot LTE topic of 2014 that made it into live networks is certainly Carrier Aggregation (CA). Agreed, there aren't too many devices that support CA at the end of 2014, but that's going to change soon. In the US, quite a number of carriers have deployed 10 + 10 MHz carrier aggregation to play catch-up with the 20 MHz carriers already used in Europe. In Europe, network operators will use 10 MHz + 20 MHz aggregations and some even 20 + 20 MHz for a stunning theoretical peak data rate of 300 Mbit/s. So where do we go from here? Obviously, aggregating 3 bands is the next logical step.

And it seems 3GPP is quite prepared for it. Have a look at this page, which has an impressive list of all sorts of LTE carrier aggregation combinations and also shows for each one in which 3GPP spec version it was introduced.

For Europe, the 3A_7A_20A combination (20 + 20 + 10 MHz) is especially interesting, as there are network operators that have spectrum in each of these bands. The peak data rate with 50 MHz of aggregated downlink spectrum, which some network operators actually own, would be 375 Mbit/s.
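The 375 Mbit/s follow from the usual rule of thumb of roughly 75 Mbit/s per 10 MHz downlink carrier with 2x2 MIMO and 64QAM; treat the per-MHz figure below as that rule of thumb rather than an exact value from the specification:

```python
# Approximate LTE downlink peak rate for a carrier aggregation combination.
# Rule of thumb: ~75 Mbit/s per 10 MHz carrier with 2x2 MIMO and 64QAM (assumption).
MBITS_PER_MHZ = 7.5

def peak_rate(carriers_mhz):
    """Aggregated peak data rate in Mbit/s for a list of carrier bandwidths in MHz."""
    return sum(carriers_mhz) * MBITS_PER_MHZ

print(peak_rate([20, 20]))       # 2-band CA as deployed in Europe today: 300 Mbit/s
print(peak_rate([20, 20, 10]))   # 3A_7A_20A with 50 MHz in total: 375 Mbit/s
```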

For North America, there are literally dozens of potential combinations listed. Not sure which ones might actually be used. But I suspect it will be difficult to come up with 50 MHz of total aggregated bandwidth in this region, so Europe will continue to have an edge when it comes to speed.