RSync for Backing Up My Owncloud

I like to have a plan B, so I regularly back up my Owncloud document folder to an external storage device and, in addition, to another Owncloud installation running on a Raspberry Pi so I can activate this instance should my main Owncloud installation ever fail while I'm not at home. So far, I've always copied over the complete document folder to the Raspi, which takes quite a while as it contains several gigabytes of data. Recently, however, I decided to have a closer look at the rsync command and noticed that it would be ideal to speed up the process, as it can compare source and destination and only copy the parts of files that have been modified. Here's the command I put together after reading a couple of "how to's" that exactly fits my needs:

rsync -avzh --rsync-path="sudo rsync" /media/owncloud-drive/data/ pi@192.168.42.3:/media/owncloud-drive/data/ --progress --delete

Looks a bit complicated but it's pretty much straightforward:

  • -avzh are the default options to use rsync in "a" = archive mode, which goes through the directory recursively and preserves permissions, ownership and time stamps. 'v' stands for verbose output, 'z' for compressing data before transmission, and 'h' for human readable output.
  • --rsync-path is used to run the rsync instance on the remote Raspberry Pi with admin rights, which are required to copy the Owncloud folder, as it needs to be accessible from the "www-data" account that is used by the web server.
  • /media/owncloud-drive/data/ is the path to the local owncloud data folder that is to be copied to the destination.
  • pi@192.168.42.3:… is the account, IP address and path of the remote device to which the data shall be copied.
  • --progress, as you might imagine, gives more details while the command is running.
  • --delete allows rsync to delete all files at the destination which no longer exist on the source.
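
A word of caution before letting this loose on a live system: with --delete, a mistyped source or destination path can wipe out files at the destination that you wanted to keep. Fortunately, rsync has a dry-run option ('-n') that only reports what it would do, so the same command can be tested safely first:

rsync -avzhn --rsync-path="sudo rsync" /media/owncloud-drive/data/ pi@192.168.42.3:/media/owncloud-drive/data/ --progress --delete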

One shouldn't be adventurous when it comes to backups but since this is still in the test phase I ran the rsync command with Apache being shut down on the target but not on the source server. So in theory Owncloud could write to the log or the sqlite database file just at the moment the modified part of the database file is copied over and thus corrupt the destination database file. I've run the command many times over several days now, and so far I've had no issues from not shutting down Owncloud on the source server during the process. Maybe I've just been lucky so far or maybe it's no problem at all, I'm not sure yet. But I'll keep you posted.
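
If it does turn out to be a problem, a safer variant would be to put Owncloud into maintenance mode on the source for the duration of the sync. A minimal sketch, assuming an Owncloud version that ships the 'occ' command line tool and an installation under /var/www/owncloud (adapt both to your setup):

sudo -u www-data php /var/www/owncloud/occ maintenance:mode --on
rsync -avzh --rsync-path="sudo rsync" /media/owncloud-drive/data/ pi@192.168.42.3:/media/owncloud-drive/data/ --progress --delete
sudo -u www-data php /var/www/owncloud/occ maintenance:mode --off

With clients locked out while rsync runs, the sqlite database file can't change in the middle of the copy.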

My Personal Technology Highlights in 2014

Another year is drawing to a close and as in the years before I wondered what had happened during the year. And as always, I was quite surprised by just how much it was when I went through my blog entries of the past 12 months. So here are my personal technology highlights of 2014:

LTE and Affordable Worldwide Internet Access

Last year was the year when roaming in Europe finally became affordable. But that was nothing compared to what has happened in 2014. In July, I switched to a mobile contract that removed roaming charges for voice, data and SMS in Europe for 5 Euros extra a month. In addition, many network operators have now started to roll out LTE roaming and I had my first European and intercontinental LTE roaming experiences. And on top of that, my network operator of choice decided to apply the former EU data roaming rates to the rest of the world, thus enabling truly affordable global Internet access roaming. I've used it in China and the US during the year and it worked perfectly. On the technology side, I've also mused about data roaming costs from a technical point of view.

3rd Edition of my Book on Mobile Networks Gets Published

About 10 years ago the first edition of my book on mobile networks was published. Needless to say, many things have changed over the years and new technologies have appeared on the scene. I thus kept updating the manuscript, and 2014 saw the publication of the 3rd edition of 'From GSM to LTE-Advanced – An Introduction to Mobile Networks and Mobile Broadband'.

Network Function Virtualization

In the making for a number of years now, the standardization and discussion around Network Function Virtualization is taking shape. Having used virtualization on the desktop for quite some time to do things like locking up Windows in a virtual machine, I decided it was time to write an NFV primer. You can find the result here.

CyanogenMod, Root Access and 'Smartphones are PCs now'

Last year marked the end of Symbian for me and I've been struggling since then to get my privacy back, i.e. to make Android stop talking to Google and others all the time. A first step towards this goal was to switch to CyanogenMod, which brought some disadvantages but opened up a whole new world for me. With CyanogenMod and root access, smartphones really started to feel like computers to me, and I wrote a long blog entry about the next revolution in computing based on those experiences. From a practical point of view, I figured out how to stop my smartphone and other devices from contacting Google and advertisers all the time to regain my privacy and to bring pleasure back to web surfing on mobile. In September I automated the blocking list update process and put the details on Github so others could benefit as well.

Security and Privacy

Like last year, security and privacy have remained important topics for me, as Edward Snowden's revelations on the scope and depth of mass surveillance continue to baffle me. 'Raising The Shields' has been my motto since, and I've put together a number of things to encrypt as much of my communication as possible. With a Raspberry Pi I've put together a security gateway for VNC remote screen sessions that encrypts both legs of the connection by using SSH tunnels. Another Raspberry Pi, and later a Banana Pi for performance reasons, have since been put into use as OpenVPN servers. And to encrypt all my Internet traffic when I'm in public places such as hotels, I've put together scripts and configuration files to configure a Raspberry Pi as an OpenVPN client and Wi-Fi access point. The scripts and configuration files are on Github for those of you with similar needs.
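
To give an idea of the principle behind the VNC gateway (host names here are just examples, not my actual setup): both the machine to be viewed and the viewing machine open an SSH tunnel to the Pi, so neither leg of the VNC connection travels unencrypted:

# on the machine to be viewed: reverse-tunnel its VNC port to the gateway
ssh -R 5900:localhost:5900 pi@raspi-gateway.example.org
# on the viewing machine: forward a local port to the gateway end of that tunnel
ssh -L 5900:localhost:5900 pi@raspi-gateway.example.org
# then connect the VNC viewer to localhost:5900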

2014 has also been the year of massive security issues. Heartbleed is the one that will probably be remembered best and I had posts about whether my Raspberry Pi servers were vulnerable and about the extent of just how bad this discovery was. I also wondered why nobody discussed the NSA's denial that it had known about the flaw and what this would mean if it was actually true.

Over the summer break, I decided to have a closer look at how assisted GPS works and found out that SUPL, one of the protocols used by some mobile chipsets, reveals my identity and location to Google every time I fire up the GPS chip. For those of you who care about the details, I had a blog post with further technical details here and one on how to trace a SUPL request here. But even if assisted GPS is switched off, it's still not easy to hide your location, even if a VPN is used as described here.
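
By the way, if you want to see the SUPL exchange for yourself, a packet capture is a good starting point. A minimal sketch, assuming the device's traffic passes through a Linux box you control and that the chipset uses Google's default SUPL server on port 7275:

# capture the SUPL traffic for later inspection in Wireshark
sudo tcpdump -i any -w supl-trace.pcap host supl.google.com and port 7275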

I was probably not the only one who was shocked to hear that whoever was behind Truecrypt decided to abandon the project, as I've been using the software on many devices. Some projects followed to review the source code and to see if someone else could continue to maintain it. I'm not sure how that turned out because I decided to switch to dm-crypt (details here and here), which is truly open source and, from what I can tell, peer reviewed.
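
For the curious, here's roughly what setting up an encrypted data partition with dm-crypt/LUKS looks like on the command line. A sketch only: /dev/sdb1 is a placeholder for an empty partition and everything on it is overwritten:

# initialize the partition as a LUKS container and set a passphrase
sudo cryptsetup luksFormat /dev/sdb1
# unlock it under the name 'cryptdata' and create a file system on it
sudo cryptsetup luksOpen /dev/sdb1 cryptdata
sudo mkfs.ext4 /dev/mapper/cryptdata
# from now on it mounts like any other partition
sudo mount /dev/mapper/cryptdata /mnt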

Owncloud and How to Enable Everyone

Last year I started to use Owncloud to host my own 'cloud services' at home and this has given rise to a number of interesting thoughts and projects. In 2014, I migrated my Owncloud installation to a NUC for higher performance. One problem with Owncloud is that it requires quite a bit of technical knowledge to get going. In other words, it's something for the nerds as setting up Dynamic DNS, configuring port forwarding, getting an SSL certificate and struggling with Internet lines at home without public IP addresses is not everybody's cup of tea. So over the course of the year I put together the pieces of the puzzle and came up with an idea of how to 'home cloud enable' everyone to keep private data private.

Open Source – The Joy of Fixing It Yourself

2014 was also the year I got rid of Windows at home. All computing devices in the household are running on a Linux distribution now and Windows is banished to virtual machines, plus an alternate OS on a single machine for those very few occasions for which Windows running on bare metal is required. Over the year I have booted into Windows at home perhaps twice.

Open source is great because you can fix things yourself. To that end I've reported a number of Owncloud issues on their Github presence, supplied code to extend the Selfoss RSS server/reader platform with new functionality I wanted to have, and set up two projects of my own on Github (the VPN Wi-Fi access point and the privacy tools for CyanogenMod described above).

And finally, on the programming side, open source has helped me a lot to better understand the fabric of the web. As part of this I worked through a book about PHP and MySQL, as sometimes books still trump online research, and implemented a private database application with a web frontend. As this was so much fun, I used my new knowledge to put together an automated system for testing the reliability of the Wi-Fi and cellular connectivity of mobile devices with a web based interface.

Fiber Connectivity

A 25 Mbit/s downlink and 5 Mbit/s uplink at home is not bad, but once you've seen what a Fiber To The Home (FTTH) connection can do, it seems slow indeed. When I benchmarked that 1 Gbit/s FTTH connection in Paris I got a sustained 260 Mbit/s in the downlink direction. Technical details and images of the installation can be found here. But while this is all nice, I wonder if fiber will become the new monopoly and if perhaps G.fast will be a remedy!? Time will tell.

From the Terabyte SSD to Vintage Computing

That 500 GB SSD I bought last year still feels brand new, but I managed to use up all its capacity only a year later. So I had to upgrade my notebook once again and have ended up with a 1 TB SSD. Again, I used the disruptive occasion to get rid of a couple of other limitations by splitting the drive into a separate OS partition and a dm-crypted data partition, which allows me to back up and restore the OS partition in a few minutes compared to the several hours required before.

Going back in time was equally exciting. At the beginning of the year I was in Silicon Valley and at last had some time to go to the Computer History Museum. Later in the year I also visited the Heinz Nixdorf museum in Paderborn, Germany, which declares itself the biggest computer museum in the world. And indeed, it is a museum not to be missed if one has an interest in vintage computing.

And last but not least: This year was the 10th anniversary of my first 3G mobile. It's been only 10 years but the mobile landscape has changed dramatically during this time.

I can hardly believe all of this happened in 2014. After all, the year felt so short…

31C3 This Week – Schedule And Links To Video Streams

Like every year, the Chaos Communication Congress takes place in Germany between Christmas and the new year. And like every year I wished I could go, but other things once again took precedence. Next year perhaps… Anyway, as every year, video streams of most sessions are available in real time and for download shortly thereafter, so I will at least be able to watch the sessions I am interested in remotely. Take a look at the schedule; the afternoon of the first day (27th December) is especially interesting from a mobile network and device point of view, with presentations by Sylvain Munaut, Tobias Engel and Karsten Nohl. Links to the video streams can be found here.

You Can’t Hide Your Location From Google With A VPN

Here's an interesting observation I recently made when I used a VPN in a hotel and came across a website that asked for my location details in the browser. I was confident Firefox would not be able to find out where I was as I used a VPN tunnel to my gateway in Paris. I thus pressed the 'yes' button, expecting that the website would then tell me that I'm in Paris. Much to my surprise, however, it came up with my exact location. How is that possible, I thought, my IP address points to my VPN server in Paris!?

A detailed answer can be found on Firefox's geolocation info web page here. In addition to the IP address, Firefox also gets the list of nearby Wi-Fi access points and sends it to Google's location server. At my location there were only two Wi-Fi access points in addition to my own, as shown in the screenshot on the left, but that's enough for Google to locate me.
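
Google even offers this lookup as a public API, which makes it easy to see how little is needed. A sketch, assuming you have an API key; the MAC addresses below are made up:

# ask Google's geolocation API for a position based on two observed access points
curl -s -H "Content-Type: application/json" \
  -d '{"wifiAccessPoints": [
        {"macAddress": "01:23:45:67:89:ab", "signalStrength": -65},
        {"macAddress": "01:23:45:67:89:ac", "signalStrength": -71}]}' \
  "https://www.googleapis.com/geolocation/v1/geolocate?key=YOUR_API_KEY"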

Incredible on the one hand and scary on the other. It's no problem in this case as Firefox asked me for permission before sending the data to Google and the web page. But it shows how easily 'others' can pinpoint your location if they manage to get a little piece of software onto any connected device you carry that has a Wi-Fi interface.

Socks and (Raspberry) Pis for Christmas

I like personal gifts for Christmas and very much appreciate self-knitted socks and other self-made things. Personally, I have to admit that handcraft is not a strength of mine so I have to resort to other things. This year, however, I think I might have the perfect personal gift! I can't knit socks and pullovers but I've decided to put a Banana Pi based Owncloud server together for the family and configure their smartphones to talk to that server instead of Google. That should be the equivalent of at least three pairs of hand made socks 🙂

Digging Is The Expensive Part – Not The Fiber

Back in the early 1980s, telecommunication was a state monopoly in pretty much every country in the world. Privatization in the 1990s and the resulting competition gave an incredible boost to the industry. Today we enjoy incredibly fast networks in many places, both fixed and wireless, and there is no sign that the increase in bandwidth requirements is slowing down anytime soon. We have come to a point, however, where the last mile infrastructure we have used for the last 25 years has reached its limits. Further evolution, both fixed and wireless, requires fiber links that not only reach up to the buildings but go right into the homes. The problem is, who's going to pay for it and what impact does it have on competition?

As I've ranted previously, the company that puts a fiber into people's homes will become the telecom monopolist of the future. So while in some countries such as France, telecom companies are rushing to put fiber into the ground to be the first, companies in other countries like Germany are lagging behind. And even in France, fiber lines are mostly installed in densely populated areas, again leaving more rural areas at a disadvantage. The reason obviously is that it is expensive to put new fiber cables into homes. The point, however, is that it's not the fiber that is expensive, it's digging the trenches and the in-house installation that is required for the new connection. But why should the telecoms companies actually have to pay for the digging?

Let's have a look at roads (for cars), for example. These are built by the state, the country or the city with taxpayer money. It's critical infrastructure, so it makes sense. Telecommunication networks are also critical infrastructure used by everyone, and I guess we all agree we don't want to go back to state monopolies in this area. But how about using taxpayers' money to do the digging and put in empty tubes through which telecoms companies can then lay their fiber cables? This would give a huge boost to the digital economy and at the same time restore a degree of competition, as it would perhaps suddenly make economic sense again to lay several fibers to a building and give people a choice of which infrastructure they want to use.

I know, I'm dreaming, as this is a political decision that has not been made so far and I don't see any indication of something like that happening in the future. But one can still dream…

Upgrading Ubuntu With Minimal Downtime And A Fallback Option

When it comes to my notebook that I use around 25 hours per day, I'm in a bit of a predicament. On the one hand it must be stable and ultra reliable. That means I don't install software on it that I don't really need and resort to virtual machines for such things. On the other hand, however, I also like new OS features, which means I had to upgrade my Ubuntu 12.04 LTS to 14.04 LTS at some point. But how can that be done with minimal downtime and without running the risk of embarking on lengthy fixing sessions after the upgrade, potentially having to find workarounds for things that don't work anymore!?

When I recently upgraded from a 512 GB SSD to a 1 TB SSD and got rid of my Truecrypt partitions a few weeks ago, I laid the foundation for just such a pain-free OS update. The cornerstone was to have an OS partition that is separate from the data partition. This way, I was now able to quickly create a backup of the OS partition with Clonezilla and restore the backup to a spare hard drive in a spare computer. And thanks to Ubuntu, the clone of my OS partition runs perfectly even on different hardware. And quick in this case really means quick: while my OS partition has a size of 120 GB, only 15 GB is used, so the backup takes around 12 minutes. In other words, the downtime of my notebook at this point of the upgrade was 12 minutes. Restoring the backup on the other PC took around 8 minutes.
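
Clonezilla itself is menu driven, but under the hood it uses partclone, which only saves blocks that are actually in use. That is why the 15 GB of used space determines the backup time and not the 120 GB partition size. A rough command line equivalent would be the following (a sketch; /dev/sda2 standing in for an ext4 OS partition is an assumption, not my actual layout):

# save only the used blocks of the OS partition to an image file
sudo partclone.ext4 -c -s /dev/sda2 -o /backup/os-partition.img
# and later restore the image to a partition of (at least) the same size
sudo partclone.ext4 -r -s /backup/os-partition.img -o /dev/sda2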

On this separate PC I could then upgrade my cloned OS partition to Ubuntu 14.04, sort out small itches and ensure that everything was still working. As expected, a couple of things broke: my MoinMoin Wiki installation got a bit messed up in the process, and Wi-Fi suspend/resume with my access point also got a bit bruised, but everything else worked just as it should.

Once I was satisfied that everything was working as it should, I used Clonezilla again to create a backup of the cloned OS partition and then restored it to my production notebook. Another 12-minute outage, plus an additional 3 minutes to restore the boot loader with a "Boot Repair" USB stick, as my older Clonezilla version could not restore the Grub boot loader of an Ubuntu 14.04 installation.
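
For those who would rather fix this by hand than with the "Boot Repair" stick, reinstalling Grub from a Linux live USB session boils down to something like the following. A sketch under the assumption that the restored OS partition is /dev/sda2 and the boot loader belongs in the MBR of /dev/sda:

# mount the restored OS partition and reinstall Grub into the MBR
sudo mount /dev/sda2 /mnt
sudo grub-install --boot-directory=/mnt/boot /dev/sda
sudo umount /mnt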

And that's it, Ubuntu 14.04 is now up and running on my production PC after as little as two 12-minute outages. In addition, I could try everything at length before I committed to the upgrade and I still have the backup of the 12.04 installation that I could restore in 12 minutes should the worst happen and I discover a showstopper down the road.

So was it worth all the hassle, other than being able to boast that I have 14.04 up and running now? Yes, I think it was, and here's a list of things that have significantly improved for my everyday use:

  • Video playback is smoother now (no occasional vertical shear anymore)
  • The dock shows the names of all LibreOffice documents now
  • Newer VirtualBox, seems to be faster (graphics, windows, etc.)
  • MTP of more phones recognized
  • Can be booted with an external monitor connected without issues
  • Nicer fonts in Wine apps (Word, etc.)
  • Nicer animations/lock screen
  • Updated LibreOffice, improved .doc and .docx support
  • The 5-year support period starts from 2014
  • Better position to upgrade in 2 years to 16.04
  • Menus in header save space
  • VLC has more graphical elements now

Walking Down Memory Lane – 10 Years Ago, My First 3G Mobile

Is 10 years a long or a short timeframe? It depends, and when I think back to my first UMTS mobile that I bought 10 years ago to the day (I checked), the timeframe seems both long and short at the same time. It seems like an eternity from an image quality point of view, as is pretty much visible in the first picture on the left, which is the first picture I took with my first UMTS phone, a Sony Ericsson V800 – Vodafone edition. Some of you might see another UMTS phone on the table, a Nokia 6630, but that was a company phone so it doesn't count.

On the other hand, 10 years is not such a long time when you think about how far the mobile industry has come since. Back in 2004 I had trouble finding UMTS network coverage, as mostly only bigger cities (population > 500,000 perhaps) had 3G coverage at the time. Also, that first UMTS phone was still limited to 384 kbit/s, no HSDPA, no dual-carrier, just a plain DCH. But it was furiously fast for the time, the color display was so much better than anything I had before and the rotating camera in the hinge was a real design highlight. Today, 10 years later, there's almost nationwide 3G and even better LTE coverage, speeds in the double digit megabit/s range are common, and screen size, UI speed, storage capacity and camera capabilities are orders of magnitude better than at that time.

Even more amazing is that at the time, people in 3GPP were already thinking about the next step. HSDPA was not yet deployed in 2004 but was already standardized, and meetings were already held to define the LTE we are using today. Just to get you into the mindset of 2004, here are two statements from the September 2004 "Long Term Evolution" meeting in Toronto, Canada:

  • Bring your Wi-Fi cards
  • GSM is available in Toronto

In other words, built-in Wi-Fi connectivity in notebooks was not yet the norm and GSM coverage was still not a given in the places where 3GPP met. Note, it was GSM, not even UMTS…

I was certainly by no means a technology laggard at the time, so I can very well imagine that many delegates attending the Long Term Evolution meeting in 2004 still had a GSM-only device that could do voice and SMS, but not much more. And still, they were laying the groundwork for an LTE that was so far away from the reality of the time that it almost seems like a miracle.

I close for today with the second image on the left, which shows my first privately owned GSM phone from 1999, a Bosch 738, my first UMTS phone from 2004 and my first LTE phone, a Samsung Galaxy S4 from 2014 (again, I had LTE devices for/from work before, but this is the first LTE device I bought for private use). 15 years of mobile development side by side.

Some Musings About LTE on Band 3 (1800 MHz)

It's 2014 and there is no doubt that LTE on Band 3 (1800 MHz) has become very successful; the Global mobile Suppliers Association (GSA) even states that "1800 MHz [is the] Prime Band for LTE Deployments Worldwide". Looking back 5 years to 2009/2010, when the first network operators began deploying LTE networks, this was far from certain.

Quite the contrary, deploying LTE in the 1800 MHz band was seen by many I talked to at the time as a bit of a gamble. The general thinking, for example in Germany, was more focused on 800 MHz (band 20) and 2600 MHz (band 7) deployments. But as the GSA's statement shows, the gamble has paid off. Range is said to be much better compared to band 7, so operators who went for this band in auctions, or could re-farm it from spectrum they already had for GSM, have an interesting advantage today over those who need to use the 2600 MHz band to increase their transmission speeds beyond the capabilities of their 10 MHz channels in the 800 MHz band.

To me, an interesting reminder that the future is far from predictable…

Smartphone Firmware Sizes Rival Those Of Desktop PCs Now

Here's the number game of the day: when I recently installed Ubuntu on a PC I noticed that the complete package that installs everything from the OS to the office suite has a size of 1.1 GB. When looking at firmware images of current smartphones I was quite surprised that the images are at least the same size or even bigger!

If you don't believe it, search for "<smartphone name> stock firmware image" on the net and see for yourself. Incredible, there's as much software on mobile devices now as there is on PCs!

A lot of it must be crapware and bloatware, though, because CyanogenMod firmware images have a size of around 250 MB. Add to that around 100 MB for a number of Google apps that need to be installed separately and you are still only at about a third of a manufacturer's stock firmware image size.