You Can’t Hide Your Location From Google With A VPN

Here's an interesting observation I recently made when I used a VPN in a hotel and came across a website that asked for my location details in the browser. I was confident Firefox would not be able to find out where I was, as I was using a VPN tunnel to my gateway in Paris. I thus pressed the 'yes' button, expecting the website to tell me that I was in Paris. Much to my surprise, however, it came up with my exact location. How is that possible, I thought, my IP address points to my VPN server in Paris!?

A detailed answer can be found on Firefox's Geolocation info web page here. In addition to the IP address, Firefox also collects a list of nearby Wi-Fi access points and sends it to Google's location server. At my location there were only two Wi-Fi access points in addition to my own, as shown in the screenshot on the left, but that was enough for Google to locate me.
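
To illustrate the principle, here's a minimal Python sketch of a Wi-Fi based location lookup against Google's public Geolocation API. Firefox's own exchange with the location server differs in detail, the API key and the MAC addresses below are made up, and the field names follow Google's public API as I understand it:

    import json
    import urllib.request

    # Hypothetical key, for illustration only.
    API_KEY = "YOUR_API_KEY"
    URL = "https://www.googleapis.com/geolocation/v1/geolocate?key=" + API_KEY

    # The browser reports the BSSIDs (MAC addresses) and signal strengths
    # of the access points it can currently see. These values are made up.
    payload = {
        "considerIp": True,  # combine with / fall back to the IP address
        "wifiAccessPoints": [
            {"macAddress": "01:23:45:67:89:ab", "signalStrength": -43},
            {"macAddress": "01:23:45:67:89:ac", "signalStrength": -67},
        ],
    }

    request = urllib.request.Request(
        URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

    with urllib.request.urlopen(request) as response:
        result = json.load(response)

    # Typical response: {"location": {"lat": ..., "lng": ...}, "accuracy": ...}
    print(result)

With just a couple of known access points in the list, the server can return a position accurate to within tens of meters, which is exactly why the two access points at my location were enough, no matter what the IP address suggested.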

Incredible on the one hand and scary on the other. It's not a problem in this case, as Firefox asked me for permission before sending the data to Google and the web page. But it shows how easily 'others' could pinpoint your location if they manage to get a little piece of software onto any connected device you carry that has a Wi-Fi interface.

Socks and (Raspberry) Pis for Christmas

I like personal gifts for Christmas and very much appreciate self-knitted socks and other self-made things. Personally, I have to admit that handcraft is not a strength of mine, so I have to resort to other things. This year, however, I think I might have the perfect personal gift! I can't knit socks and pullovers, but I've decided to put a BananaPi-based Owncloud server together for the family and configure their smartphones to talk to that server instead of Google. That should be the equivalent of at least three pairs of handmade socks 🙂

Digging Is The Expensive Part – Not The Fiber

Back in the early 1980s, telecommunication was a state monopoly in pretty much every country in the world. Privatization in the 1990s and the resulting competition gave an incredible boost to the industry. Today we enjoy incredibly fast networks in many places, both fixed and wireless, and there is no sign that the increase in bandwidth requirements is slowing down anytime soon. We have come to a point, however, where the last-mile infrastructure we have used for the past 25 years has reached its limits. Further evolution, both fixed and wireless, requires fiber links that not only reach the buildings but go right into the homes. The problem is: who's going to pay for it, and what impact does it have on competition?

As I've ranted previously, the company that puts a fiber into people's homes will become the telecom monopolist of the future. So while in some countries such as France telecom companies are rushing to put fiber into the ground to be the first, companies in other countries like Germany are lagging behind. And even in France, fiber lines are mostly installed in densely populated areas, leaving more rural areas once again at a disadvantage. The obvious reason is that it is expensive to bring new fiber cables into homes. The point, however, is that it's not the fiber that is expensive, it's digging the trenches and the in-house installation required for the new connection. But why should the telecoms companies actually have to pay for the digging?

Let's have a look at roads (for cars), for example. These are built by the state, the county or the city with taxpayer money. They are critical infrastructure, so it makes sense. Telecommunication networks are also critical infrastructure used by everyone, and I guess we all agree we don't want to go back to state monopolies in this area. But how about using taxpayers' money to do the digging and put in empty tubes through which telecom companies can then pull their fiber cables? This would give a huge boost to the digital economy, and at the same time it would restore a degree of competition, as it would perhaps suddenly make economic sense again to lay several fibers to a building and give people a choice again as to which infrastructure they want to use.

I know, I'm dreaming, as this is a political decision that has not been made so far and I don't see any indication of something like that happening in the future. But one can still dream…

Upgrading Ubuntu With Minimal Downtime And A Fallback Option

When it comes to my notebook, which I use around 25 hours per day, I'm in a bit of a predicament. On the one hand, it must be stable and ultra-reliable. That means I don't install software on it that I don't really need and resort to virtual machines for such things. On the other hand, I also like new OS features, which means I had to upgrade my Ubuntu 12.04 LTS to 14.04 LTS at some point. But how can that be done with minimal downtime, without running the risk of embarking on lengthy fixing sessions after the upgrade, and without potentially having to find workarounds for things that no longer work!?

When I upgraded from a 512 GB SSD to a 1 TB SSD and got rid of my Truecrypt partitions a few weeks ago, I laid the foundation for just such a pain-free OS update. The cornerstone was to have an OS partition that is separate from the data partition. This way, I was able to quickly create a backup of the OS partition with Clonezilla and restore it to a spare hard drive in a spare computer. And thanks to Ubuntu, the clone of my OS partition runs perfectly even on different hardware. And quick in this case really means quick: while my OS partition has a size of 120 GB, only 15 GB is used, so the backup takes around 12 minutes. In other words, the downtime of my notebook at this point of the upgrade was 12 minutes. Restoring the backup on the other PC took around 8 minutes.
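
For the curious, here's the back-of-the-envelope arithmetic in Python. The numbers are the ones from above; the ~21 MB/s effective rate is simply derived from them and is not a Clonezilla specification:

    import shutil

    # Numbers from the upgrade described above.
    used_gb = 15          # used space on the 120 GB OS partition
    backup_minutes = 12   # observed Clonezilla backup time

    effective_rate = used_gb * 1024 / (backup_minutes * 60)  # MB/s
    print(f"Effective backup rate: {effective_rate:.0f} MB/s")

    # Estimate how long a clone of the current root partition would take at
    # that rate (Clonezilla only copies the used blocks, not the whole 120 GB).
    usage = shutil.disk_usage("/")
    used_now_gb = usage.used / 1024**3
    estimate_min = used_now_gb * 1024 / effective_rate / 60
    print(f"Used: {used_now_gb:.1f} GB, estimated clone time: {estimate_min:.0f} min")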

On this separate PC I could then upgrade the cloned OS partition to Ubuntu 14.04, sort out small itches and ensure that everything still worked. As expected, a couple of things broke: my MoinMoin Wiki installation got a bit messed up in the process and Wi-Fi suspend/resume with my access point also got a bit bruised, but everything else worked just as it should.

Once I was satisfied that everything was working as it should, I used Clonezilla again to create a backup of the cloned OS partition and then restored it to my production notebook. Another 12-minute outage, plus an additional 3 minutes to restore the boot loader with a "Boot Repair" USB stick, as my older Clonezilla version could not restore an Ubuntu 14.04 Grub boot loader installation after the restore process.

And that's it, Ubuntu 14.04 is now up and running on my production PC with just two 12-minute outages. In addition, I could try everything at length before I committed to the upgrade, and I still have the backup of the 12.04 installation that I could restore in 12 minutes should the worst happen and I discover a showstopper down the road.

So was it worth all the hassle, other than being able to boast that I have 14.04 up and running now? Yes, I think it was, and here's a list of things that have significantly improved for my everyday use:

  • Video playback is smoother now (no occasional vertical shear anymore)
  • The dock now shows the names of all LibreOffice documents
  • Newer VirtualBox, which seems to be faster (graphics, windows, etc.)
  • MTP on more phones is recognized
  • Can be booted with an external monitor connected without issues
  • Nicer fonts in Wine apps (Word, etc.)
  • Nicer animations/lock screen
  • Updated LibreOffice with improved .doc and .docx support
  • The 5-year support period now starts from 2014
  • Better position to upgrade to 16.04 in 2 years
  • Menus in the header save space
  • VLC has more graphical elements now

Walking Down Memory Lane – 10 Years Ago, My First 3G Mobile

Is 10 years a long or a short timeframe? It depends, and when I think back to my first UMTS mobile, which I bought 10 years ago on this very day (I checked), the timeframe seems both long and short at the same time. It seems like an eternity from an image quality point of view, as is pretty much visible in the first picture on the left, the first photo I took with my first UMTS phone, a Sony Ericsson V800 – Vodafone edition. Some of you might see another UMTS phone on the table, a Nokia 6630, but that was a company phone so it doesn't count.

On the other hand, 10 years is not such a long time when you think about how far the mobile industry has come since. Back in 2004 I had trouble finding UMTS network coverage, as mostly only bigger cities (population > 500,000 perhaps) had 3G coverage at the time. Back then, that first UMTS phone was still limited to 384 kbit/s, no HSDPA, no dual-carrier, just a plain DCH. But it was furiously fast for the time, the color display was so much better than anything I had before and the rotating camera in the hinge was a real design highlight. Today, 10 years later, there's almost nationwide 3G and even better LTE coverage, speeds in the double-digit megabit/s range are common, and screen size, UI speed, storage capacity and camera capabilities are orders of magnitude better than back then.

Even more amazing is that at the time, people in 3GPP were already thinking about the next step. HSDPA was not yet deployed in 2004 but already standardized, and meetings were already being held to define the LTE we are using today. Just to get you into the mindset of 2004, here are two statements from the September 2004 "Long Term Evolution" meeting in Toronto, Canada:

  • Bring your Wi-Fi cards
  • GSM is available in Toronto

In other words, built-in Wi-Fi connectivity in notebooks was not yet the norm and it was still not certain that there would be GSM coverage in the places where 3GPP went. Note, it was GSM, not even UMTS…

I was certainly by no means a technology laggard at the time, so I can very well imagine that many delegates attending the Long Term Evolution meeting in 2004 still had a GSM-only device that could do voice and SMS, but not much more. And still, they were laying the groundwork for LTE, which was so far away from the reality of the time that it almost seems like a miracle.

I close for today with the second image on the left, which shows my first privately owned GSM phone from 1999, a Bosch 738, my first UMTS phone from 2004 and my first LTE phone, a Samsung Galaxy S4 from 2014 (again, I had LTE devices for/from work before, but this is the first LTE device I bought for private use). 15 years of mobile development side by side.

Some Musings About LTE on Band 3 (1800 MHz)

It's 2014 and there is no doubt that LTE on Band 3 (1800 MHz) has become very successful; the Global mobile Suppliers Association (GSA) even states that "1800 MHz [is the] Prime Band for LTE Deployments Worldwide". Looking back 5 years to 2009/2010, when the first network operators began deploying LTE networks, this was far from certain.

Quite the contrary, deploying LTE on 1800 MHz was seen by many I talked to at the time as a bit of a gamble. The general thinking back then, for example in Germany, was more focused on 800 MHz (Band 20) and 2600 MHz (Band 7) deployments. But as the GSA's statement shows, the gamble has paid off. Range is said to be much better compared to Band 7, so operators who went for this band in auctions, or who could re-farm it from spectrum they already had for GSM, have an interesting advantage today over those who need to use the 2600 MHz band to increase their transmission speeds beyond the capabilities of their 10 MHz channels in the 800 MHz band.
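
To put a rough number on the range argument, here's a small Python calculation of free-space path loss at the three frequencies. It ignores building penetration, antenna characteristics and everything else that matters in a real network, so take it purely as an illustration of why lower bands carry further:

    from math import log10

    def fspl_db(distance_km: float, frequency_mhz: float) -> float:
        """Free-space path loss in dB for the given distance and frequency."""
        return 32.44 + 20 * log10(distance_km) + 20 * log10(frequency_mhz)

    for f in (800, 1800, 2600):
        print(f"{f} MHz: {fspl_db(1.0, f):.1f} dB at 1 km")

    # The deltas are what count: Band 3 (1800 MHz) has roughly 3 dB less
    # free-space path loss than Band 7 (2600 MHz), and Band 20 (800 MHz)
    # about 7 dB less than Band 3.
    print(f"Band 7 vs Band 3:  {fspl_db(1, 2600) - fspl_db(1, 1800):.1f} dB")
    print(f"Band 3 vs Band 20: {fspl_db(1, 1800) - fspl_db(1, 800):.1f} dB")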

To me, an interesting reminder that the future is far from predictable…

Smartphone Firmware Sizes Rival Those Of Desktop PCs Now

Here's the number game of the day: when I recently installed Ubuntu on a PC, I noticed that the complete package that installs everything from the OS to the office suite has a size of 1.1 GB. When looking at firmware images of current smartphones, I was quite surprised that the images are at least as big, or even bigger!

If you want to check for yourself, search for "<smartphone name> stock firmware image" on the net and have a look. Incredible, there's as much software on mobile devices now as there is on PCs!

A lot of it must be crap- and bloatware, though, because Cyanogen firmware images have a size of around 250 MB. Add to that around 100 MB for a number of Google apps that need to be installed separately and you are still only at about a third of a manufacturer's stock firmware image size.

Check The Hotel’s Wi-Fi Speed Before Reserving

Whenever I make a hotel reservation these days, I can't help but wonder how good their Wi-Fi actually is, or whether it works at all. Most of the time I don't care because I can use my mobile data allowance anywhere in Europe these days. Outside of Europe, however, it's a different story as it's more expensive, so there I still do care. Recently I came across HotelWifiTest, a website that focuses on data rates of hotel Wi-Fi networks, based on hotel guests using the site's speed tester. Sounds like an interesting concept, and it promises good speeds for the next hotel I'm going to visit. So let's see…

A Capacity Comparison between LTE-Advanced CA and UMTS In Operational Networks Today

With LTE-Advanced carrier aggregation being deployed in 2014, it recently struck me that there's now a big difference in deployed capacity between LTE and UMTS. On the UMTS side, most network operators have had two 5 MHz carriers deployed in busy areas for quite a number of years now. In some countries, some carriers have more spectrum and have thus deployed three 5 MHz carriers; I'd say that's rather the exception, though. On the LTE side, carriers with enough spectrum have deployed two 20 MHz carriers in busy areas and can easily extend that with additional spectrum in their possession as required. That's also a bit of an exception, and I estimate that most carriers have deployed between 10 and 30 MHz today. In other words, at the top end it's 15 MHz of UMTS spectrum compared to 40 MHz of LTE spectrum. Quite a difference, and the gap is widening.
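
To make the comparison concrete, a tiny Python sketch. The carrier counts are the best-case figures from above; the spectral-efficiency factor is purely an illustrative assumption on my part, not a measured value:

    # Best-case deployed downlink spectrum per technology, in MHz.
    umts_mhz = 3 * 5    # three 5 MHz UMTS carriers
    lte_mhz = 2 * 20    # two 20 MHz LTE carriers, aggregated

    print(f"UMTS: {umts_mhz} MHz, LTE: {lte_mhz} MHz")
    print(f"Spectrum ratio: {lte_mhz / umts_mhz:.1f}x")

    # Illustrative assumption: LTE squeezes roughly 1.5x more bits out of
    # each Hz than HSPA. With that, the capacity gap grows to around 4x.
    assumed_efficiency_gain = 1.5
    print(f"Rough capacity ratio: {lte_mhz / umts_mhz * assumed_efficiency_gain:.1f}x")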

Pushing My VPN Gateway Speed to 20 Mbit/s With A BananaPi

To secure my fixed and mobile data transfers I've been using OpenVPN for many years now. With fixed and mobile networks becoming faster, I have to continuously improve my setup as well to make maximum use of the available speed at the access. At the moment, my limit at the server side is 30 Mbit/s, while at the access side, my Wi-Fi to VPN gateway's limit is 10 Mbit/s. Time to change that.

A quick recap of what has happened so far: earlier this year I moved from an OpenVPN server on an OpenWRT Wi-Fi router to an OpenVPN server running on a RaspberryPi. At the time, my VDSL uplink of 5 Mbit/s was the limit. With that limit removed, the next bottleneck was the processing capacity of the RaspberryPi, which limited the tunnel to 10 Mbit/s. The logical next step was to move to a BananaPi, whose limit with OpenVPN is around 30 Mbit/s.

In many cases I was still limited to 10 Mbit/s, however, as I was using a RaspberryPi as a Wi-Fi / VPN client gateway to tunnel the data traffic of many Wi-Fi devices through a single tunnel. For details see this blog entry and the Wiki and code for this project on Github. To move beyond 10 Mbit/s, I had to upgrade the hardware on this side to a BananaPi as well. The process is almost straightforward because I run Lubuntu 14.04 on the BananaPi which, like Raspbian running on the RaspberryPi, is based on Debian Wheezy. With a few adaptations, the script I put together for the RaspberryPi also runs on the BananaPi and converts it into an OpenVPN client gateway in a couple of minutes.
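
The actual script is on Github; to give an idea of what the conversion boils down to, here's a heavily simplified Python sketch of the core steps, run as root. The interface names (wlan0 for the local Wi-Fi side, tun0 for the OpenVPN tunnel) and the config file path are assumptions, and the real script does quite a bit more (hostapd, DHCP, routing fallbacks, etc.):

    import subprocess

    def run(cmd):
        """Run a shell command and fail loudly if it doesn't work."""
        print("+", cmd)
        subprocess.run(cmd, shell=True, check=True)

    # 1. Let the kernel forward packets between interfaces.
    run("sysctl -w net.ipv4.ip_forward=1")

    # 2. Masquerade everything that leaves through the OpenVPN tunnel so
    #    replies find their way back to the Wi-Fi clients.
    run("iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE")

    # 3. Forward traffic from the local Wi-Fi clients into the tunnel and
    #    allow the return traffic back out.
    run("iptables -A FORWARD -i wlan0 -o tun0 -j ACCEPT")
    run("iptables -A FORWARD -i tun0 -o wlan0 "
        "-m state --state RELATED,ESTABLISHED -j ACCEPT")

    # 4. Start the OpenVPN client (config file path is a placeholder).
    run("openvpn --config /etc/openvpn/client.conf --daemon")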

While I expected to see a throughput of 30 Mbit/s, the link between the two BananaPis levels out at 'only' around 20 Mbit/s, as shown in the screenshot on the left. I haven't yet found out why this is the case, as on both devices processor load is only around 65%, so there are ample reserves left to go faster. For the moment I've run out of ideas as to what it could be. Still, doubling the speed with this step is not too bad either.
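
In case someone wants to reproduce such a measurement without installing extra tools, here's the kind of quick-and-dirty Python throughput test one can run between the two ends of the tunnel. The port is a placeholder, and this is not the tool behind the screenshot above:

    import socket
    import sys
    import time

    PORT = 5001          # placeholder port
    CHUNK = 64 * 1024    # 64 KB per send/recv
    DURATION = 10        # seconds the client keeps sending

    def server():
        """Receive data until the client disconnects and print the throughput."""
        with socket.socket() as s:
            s.bind(("0.0.0.0", PORT))
            s.listen(1)
            conn, addr = s.accept()
            total, start = 0, time.time()
            with conn:
                while True:
                    data = conn.recv(CHUNK)
                    if not data:
                        break
                    total += len(data)
            elapsed = time.time() - start
            print(f"{total * 8 / elapsed / 1e6:.1f} Mbit/s from {addr[0]}")

    def client(host):
        """Send zeros through the tunnel for DURATION seconds."""
        payload = bytes(CHUNK)
        with socket.create_connection((host, PORT)) as s:
            end = time.time() + DURATION
            while time.time() < end:
                s.sendall(payload)

    if __name__ == "__main__":
        # Usage: python3 tunnel_speed.py server
        #        python3 tunnel_speed.py client <ip of the other tunnel end>
        if sys.argv[1] == "server":
            server()
        else:
            client(sys.argv[2])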