Who Is Interested In ‘Mobile’ and ‘Desktop’ Convergence Like I Want It?

For the last couple of years a number of companies have been trying to find a way to converge the operating system and user interfaces of mobile and desktop devices. Perhaps time is getting a bit scarce now, as smartphone processors come pretty close to the computational power, memory size and graphics capabilities of full-grown desktop PCs. Sure, their screens and batteries are smaller, but at some point it will be trivial to connect them to a bigger screen, keyboard and mouse and ask them to be the 'desktop'. Perhaps we will reach this point with tablets first? But what kind of operating system will run on it?

With almost the screen size of a small notebook, the only things that are missing in the tablet products we use today are a hardware keyboard and a mouse. Apple is getting closer and closer to this point with the latest MacBook of 2015. Due to its thinness and its single USB 3.1 connector it is pretty much a tablet with a keyboard attached. Unlike a tablet, however, it runs a full desktop operating system. But the keyboard is fixed to the screen and the graphical user interface is still geared towards keyboard and touchpad.

Microsoft is also on its way with the Surface line of notebook/tablet hybrids, even though commercial success is nowhere to be seen yet. Their Surface notebooks/tablets now also run a full Windows operating system on a tablet-sized device with an x86 processor and a removable keyboard, so that is perhaps even closer to a converged device than the MacBook described above. I don't like the Windows 8 style graphical user interface, and closed source is not my cup of tea either, but they are definitely innovating in this space.

The third player in the desktop/mobile space is Google with Android and Chromebooks. While I like the fact that Chrome OS runs on Linux, the idea that everything lives in the Google cloud makes their vision of a combined mobile/desktop future not very appealing to me. I can imagine my data being stored on my own cloud server at home, but I'm not yet willing to give up the huge advantages of on-device data and application storage when it comes to speed, security and being able to get work done in places where Internet connectivity is not present or too slow.

So perhaps it's time now to get hold of a 'Surface' and install Ubuntu on it to see how usable the Unity graphical user interface, or perhaps KDE or something else, is on a tablet once keyboard and mouse are removed!?

Windows 10 And Wi-Fi Sense – Consider Your Network Compromised?

<sarcasm on> I'm sure lots of people are looking forward these days to upgrading to Windows 10 <sarcasm off>. Personally, the only reason why I'm even writing about it is the new Wi-Fi Sense feature that has jumped over from Windows Phone to the desktop to freely share Wi-Fi passwords with your contacts in Outlook, Hotmail, Skype and Facebook. Great for convenience, bad for security, bad for me.

Bad for me because in the past I have let friends access my Wi-Fi network at home. I'm sure I haven't changed my Wi-Fi password since, because it's a pain to change it on all of my devices that use the network, especially printers and other devices that don't offer an easy way to change the password. But I guess I have to do it now, because friends with whom I've shared my password in the past and who upgrade their Windows PCs can now freely share it with their friends, i.e. people I don't even know, with a simple tick in a check box. Obviously, Microsoft gets to see and store it, too.

Fortunately it's not as bad as it sounds at first, since users still have to tick a check box for each Wi-Fi network they want to share credentials for. Ars Technica has a detailed description of how the feature works if you are interested in the details. Still, I think it's time for a password change at home.

Yes, I know, giving out my Wi-Fi password was never secure to begin with. This is why I have had a guest Wi-Fi SSID and password for quite some time now that I can quickly switch on and off as required. Another benefit of this solution is that I can change the SSID and/or the password every now and then so things don't get out of hand. This way, even if friends decide to share my guest Wi-Fi credentials with Microsoft and the world, or do so accidentally, it's of little use to anyone, as the guest Wi-Fi SSID is automatically shut down a pre-configured time after the last user has left.

And that by the way also limits the damage done by those automatic 'backups' of private data to Google Servers that Android based devices perform every now and then.

How To Protect Against IPv6 Leakage in an IPv4 VPN Environment – Part 3 (Raspi VPN Gateway)

In the previous two parts on IPv6 leakage in IPv4-only VPN environments I’ve taken a look at how things can be fixed on the client side (part 1) and on the network side (part 2). While at conferences and in hotels I often use a Raspberry Pi Wi-Fi VPN client gateway to connect all my Wi-Fi devices to the local network with a single sign-in. Once connected, the Raspberry Pi establishes a secure VPN connection that is then used by all my devices. In other words, the VPN tunnel is not established from my PC but from the gateway. The big question is: does IPv6 leakage occur here as well?

I gave it a quick try and everything is o.k. Per default, Raspbian does not activate IPv6 at all. When activated manually (sudo modprobe ipv6), the Raspi will request an IPv6 address on the backhaul interface. Even if it gets one, it doesn’t share it with the local Wi-Fi link to which all my devices are connected. In other words, no bridge is created, no IPv6 leakage can occur and all traffic to and from my local Wi-Fi devices passes through the VPN tunnel.
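If you want to double-check this on your own gateway, here is a minimal sketch of what I look at, assuming the access point for the local devices runs on wlan1 (interface names will differ from setup to setup):

ip -6 addr show dev wlan1    # only a link-local (fe80::) address should appear here, no global address
ip -6 route show dev wlan1   # no global IPv6 prefix should be routed towards the client Wi-Fi

On a client connected to the gateway, a test page such as test-ipv6.com should report no IPv6 connectivity while the tunnel is up.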

Good, one thing less to deal with…

The Martian – A Book Review

O.k., this is a bit out of the ordinary on this blog, but apart from computers and networks I’m also interested in spaceflight. Over the weekend I came across “The Martian” by Andy Weir due to a recommendation of an online ebook store and found it so outstanding that I had to write a few words about it here…

The story is about an astronaut in the near future stranded on Mars after a mission abort has gone horribly wrong and about his quest to survive until he can be picked up. Finally a book about Mars again that is not about astronauts encountering hostile aliens that want to kill them. What stands out is not only the storyline with lots of turns, twists and surprises, but also how closely the story is woven around NASA’s plans to send humans to Mars and the technology existing and under development. Also, the writing style and the protagonist’s character and humor make this book a page turner. I usually take my time reading books but I went through this one in two days (i.e. nights) flat. I just couldn’t put it down, it’s an extraordinary piece of work. 10 thumbs up!

LTE Roaming With 1 Mbit/s – They Must Be Joking…

This week, another network operator in Germany announced that they will start LTE data roaming on 1 August. For two reasons, however, the announcement is more than a little bit strange.

First, they are not only a year late to the game, they will also start with only three foreign networks, while their competitors already have roaming agreements with several dozen networks around the globe. The second restriction is even weirder: LTE roaming data speeds are limited to 1 Mbit/s, and nobody knows what that is good for!? While my LTE data roaming speeds easily surpass 20 Mbit/s, they want to limit their customers to 1/20 of that. No way this would work for me, because I regularly use LTE data roaming for notebook tethering.

Hm, that makes me wonder if they are bundling a couple of old E-1 lines to connect to their roaming hub and are afraid of massive overload if customers want to use the technology as it was designed!? 😉 Sorry, an announcement like that needs to be ridiculed.

How To Protect Against IPv6 Leakage in an IPv4 VPN Environment – Part 2 (Server Side Changes)

While I'm excited about the availability of IPv6 from my mobile network operator of choice, the disadvantage that comes along with it is IPv6 leakage when using Witopia’s VPN service with OpenVPN on Ubuntu. For some strange reason their DNS servers answer IPv6 AAAA requests even though their product only tunnels IPv4 packets. My OpenVPN server setup at home on a Raspberry Pi showed the same behavior so far, but as it is under my own control I started looking for ways to change that.

At first it looked as if it would be a straightforward thing to implement. But the first look was a bit deceiving, and in the end it took a bit more tinkering before the DNS server queried through the VPN tunnel only returned responses for IPv4 A-requests and empty results for IPv6 AAAA-requests.

The OpenVPN server setup I have linked to above relies on a DNS server already present in the server’s network to also answer queries from remote OpenVPN clients. As there was no way for me to change that DNS server’s behavior, I had to set up a separate DNS server on the OpenVPN server Raspi and then reconfigure OpenVPN to point clients to that DNS server.

Bind Comes To The Rescue

On Linux, “bind” is one of the most popular DNS servers. While it is probably overkill to use “bind” just as a DNS request forwarder, it offers a nice option to return an empty result for IPv6 AAAA-requests when they are sent over IPv4. Here’s some more background information on the option. For this option to work, bind needs to be compiled with a special flag so that it recognizes the “filter-aaaa-on-v4 yes;” configuration option. Unfortunately, bind on Raspbian does not come compiled with it, so I had to compile bind from source. That sounds more difficult than it is, however.

But perhaps your distribution set the correct flag before bind was compiled, so my advice is to install bind from the repositories first and see if it works with the “filter-aaaa-on-v4” option. If it doesn’t, it can be uninstalled before downloading and compiling bind from its source. Installing from the repositories first also has the benefit that all configuration files are already in place, which is perhaps not the case when compiling from source.
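If you want a quick indication before going through the whole configuration, named reports the options it was built with. This should work on most distributions once the bind9 package is installed, although the exact output format varies between bind versions:

named -V | grep filter-aaaa

If the configure line in the output contains “--enable-filter-aaaa”, the packaged bind should accept the “filter-aaaa-on-v4” option.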

Installing Bind From The Repositories

Installing bind from the repositories works with a simple “sudo apt-get update && sudo apt-get install bind9” command. Afterwards, uncomment and modify the following section in “/etc/bind/named.conf.options” to enable DNS query forwarding to the upstream DNS server used in the network:

    forwarders {
         8.8.8.8;
    };

Once done, restart bind to see if the configuration change has been accepted: “sudo service bind9 restart“. If no error messages are shown on the console, bind is up and running.
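Before adding the filter option it is worth making sure that forwarding actually works. A quick sketch, run on the Raspberry Pi itself (example.com is just a placeholder, any hostname will do):

dig @127.0.0.1 example.com A +short

If the forwarder is set up correctly, this returns one or more IPv4 addresses.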

Check If the IPv6 Option Works

In the next step, add the IPv6 filter option to the same configuration file and also allow queries from other networks by inserting the two additional lines at the end of the options block below (“filter-aaaa-on-v4” and “allow-query”). The latter is necessary because the OpenVPN clients get IPv4 addresses from a subnet that is different from the subnet the OpenVPN server uses on the backhaul link:

options {
    directory "/var/cache/bind";

    //…

    forwarders {
         8.8.8.8;
    };

    //…

    dnssec-validation auto;

    auth-nxdomain no;    # conform to RFC1035
    listen-on-v6 { any; };

    filter-aaaa-on-v4 yes;
    allow-query {any;};
};

Once done, restart bind again. If an error message is shown saying that the filter option is not supported, some extra work is required. Otherwise, you are almost good to go, and the only thing required for OpenVPN clients to query this DNS server instead of the previous one is to change the DNS option in the OpenVPN config file as shown at the end of this post.

Compile and Install Bind From Source

Before proceeding, uninstall bind again with “sudo apt-get remove bind9“. While this removes the binaries, it leaves the configuration files, including the one we have modified, in place. Now download and compile bind with the following commands as described here. As there might be a more up-to-date version of bind at the time you read this, it might be worthwhile to modify the version number in the commands accordingly.

# install the build dependencies
sudo apt-get install build-essential openssl libssl-dev libdb5.1-dev

# download and unpack the bind source
mkdir bind-install
cd bind-install
wget ftp://ftp.isc.org/isc/bind9/9.9.7/bind-9.9.7.tar.gz
tar zxvf bind-9.9.7.tar.gz
cd bind-9.9.7

# configure with the same directory layout as the packaged bind and with the AAAA filter feature enabled
fakeroot ./configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --sysconfdir=/etc/bind --localstatedir=/var --enable-threads --enable-largefile --with-libtool --enable-shared --enable-static --with-openssl=/usr --with-gnu-ld --with-dlz-postgres=no --with-dlz-mysql=no --with-dlz-bdb=yes --with-dlz-filesystem=yes --with-dlz-stub=yes CFLAGS=-fno-strict-aliasing --enable-rrl --enable-newstats --enable-filter-aaaa

# compile and install (this is the part that takes a while on a Raspberry Pi)
make
sudo make install

Some patience is required as the process takes around 45 minutes. But once done, everything is ready and a “sudo /etc/init.d/bind9 start” will start the service, this time with the IPv6 filter in place, as the configuration file we modified further above is still there.

OpenVPN Modification

The last step now is to tell the OpenVPN server to point new clients to this DNS server. This is done by modifying the push “dhcp-option DNS x.x.x.x” option in the “/etc/openvpn/server.conf” file, with x.x.x.x being the IP address of the Raspberry Pi. A “sudo service openvpn restart” activates the change.
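As a concrete sketch, assuming the widely used default OpenVPN tunnel subnet 10.8.0.0/24, in which the server end of the tunnel gets 10.8.0.1 (your addresses may of course differ), the relevant lines in “/etc/openvpn/server.conf” would look something like this:

# tunnel subnet from which client addresses are assigned
server 10.8.0.0 255.255.255.0
# tell clients to use the bind instance on this Raspberry Pi for DNS lookups
push "dhcp-option DNS 10.8.0.1"

Clients pick up the new DNS server the next time they connect after the restart.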

Verifying That Everything Works

The next time an OpenVPN client device connects to the server, all DNS requests for AAAA records get an empty response. This can be verified e.g. by typing “dig youtube.com AAAA”, which should return an empty result and not an IPv6 address. Another option is to use Wireshark for the purpose.
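To make the comparison explicit, here is roughly what I would expect on a client with the tunnel up, assuming it uses the DNS server pushed by the OpenVPN server:

dig youtube.com A +short      # should return one or more IPv4 addresses
dig youtube.com AAAA +short   # should return nothing at all

If the AAAA query still returns IPv6 addresses, the client is most likely not using the DNS server pushed through the tunnel.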

And that solves my OpenVPN IPv6 leakage issue without any modification on the client side!

How To Protect Against IPv6 Leakage in an IPv4 VPN Environment – Part 1

Last year I wrote a post about how one has to be careful when establishing an IPv4-only VPN tunnel over a network interface that has both a public IPv4 and a public IPv6 address assigned to it. If the DNS server on the other side of the VPN tunnel returns IPv6 addresses and the network stack on the client side prefers IPv6, which is usually the case, then connection establishment will not go through the VPN tunnel but right around it via the physical network interface.

Quickly said at the time and quickly forgotten again, as IPv6 connectivity was still rare. But those days are over, as my mobile network operator of choice now supports IPv4v6 connectivity. When tethering my notebook via my smartphone, it now configures itself for IPv4 and IPv6. That also means that I immediately get unwanted IPv6 leakage while using my VPN.
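A quick way to check whether IPv6 traffic is bypassing the tunnel is to ask an external service for the public address while the VPN is up, for example with curl (icanhazip.com is just one of several services that can be used for this):

curl -4 https://icanhazip.com         # should return the IPv4 address of the VPN exit
curl -6 https://ipv6.icanhazip.com    # if this returns an address, IPv6 traffic is going around the tunnel

If the second command returns the IPv6 address assigned by the mobile network, the leak described above is happening.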

Some Mac and PC VPN client software used by some VPN providers seems to have built-in protection against this. On my Ubuntu systems, however, the OpenVPN client unfortunately does not. The only way to fix this on the client side is to disable IPv6, either permanently or temporarily.

As I’d like to use IPv6 in general, I don’t want to disable it permanently. A temporary alternative for Ethernet and Wi-Fi connections is to restrict IPv6 to link-local use, as shown in the screenshot on the left. The problem, however, is that a new Wi-Fi connection created e.g. at a hotel or exhibition venue will have full IPv6 enabled again, and it’s more than likely that one forgets to turn it off manually after the initial connection establishment.
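Another temporary approach that works regardless of which connection profile is in use is to switch IPv6 off system-wide via sysctl before bringing up the tunnel. A minimal sketch (the setting applies to all interfaces and does not survive a reboot):

sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1   # turn IPv6 off before connecting the VPN
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=0   # turn it back on afterwards

Of course this suffers from the same weakness: one has to remember to actually do it before every VPN session.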

But why do DNS servers on the other side of an IPv4-only VPN actually have to return IPv6 addresses? I use Witopia for some scenarios and their DNS servers happily return IPv6 addresses. I wish they wouldn’t and it makes me wonder why they are doing it when their VPN service is limited to IPv4 anyway!?

Fortunately, I use my private VPN servers for most of my VPN needs. They also return IPv6 addresses but here I can change the behavior of the DNS servers behind the VPN server to only return IPv4 DNS results. As configuring that was a bit tricky I’ll make a separate blog post out of that. So stay tuned if you are interested!

How To Configure OpenVPN So I Can Return to LTE

On my commute to work I make good use of the excellent network coverage along the railway track. LTE coverage is almost perfect, but just almost, as there are a few locations where my data session is handed over to the 3G network. Once on 3G, the only way back to LTE, at least for now, is for the network to set the device into Cell-PCH or Idle state so it can search for LTE and return autonomously. That unfortunately doesn't happen in my case, as my OpenVPN server sends a UDP keep-alive packet every 10 seconds, thus preventing the smartphone I use for tethering from returning to LTE. It's not that big of a deal as 3G is quite fast as well, so I hardly notice the difference. But I'm a perfectionist… So I had a closer look at the OpenVPN server configuration (in /etc/openvpn/server.conf) and noticed an option for keepalive timers:

keepalive 10 120

The "10" suspiciously looked like the 10 seconds interval that keeps my 3G connection in Cell-DCH state. After changing the line to

keepalive 30 120

the UDP keepalive packets are now spaced 30 seconds apart. That's more than enough time for the network to set my device to Cell-PCH or Idle state, which, in my case, happens after around 12 seconds of inactivity. Shortly after, my tethering smartphone changes back to LTE.
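For reference, my understanding of the two numbers from the OpenVPN documentation: the first is the ping interval, the second the timeout after which a dead connection is assumed, and on the server side the timeout is doubled so that clients always give up first. Roughly, the directive is shorthand for something like this:

# keepalive 30 120 in a server config expands approximately to:
ping 30                     # send a keepalive ping every 30 seconds when the tunnel is idle
ping-restart 240            # server assumes the connection is dead after 240 seconds (2 x 120)
push "ping 30"              # clients also ping every 30 seconds
push "ping-restart 120"     # clients assume the connection is dead after 120 seconds

So increasing the first value only changes how often keepalive packets are sent, not how quickly a broken tunnel is detected.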

Perfect! And on top of all this I might even save some battery power as fewer packets are sent and received now.

 

What has Changed In Mobile Computing Since 2009?

In a previous post I wrote about what has changed in desktop computing over the last 6 years. In summary: not very much, and I still use my notebook from back then, with an up-to-date operating system, for some purposes such as multimedia consumption. So what about mobile computing and mobile devices, how have things evolved in this domain in the same time frame?

Back in 2008 I wrote a review of how well an entry level phone at the time, a Nokia 5000, could be used for email and web browsing. Back then, the point was to show that even with an entry level device, it had become possible to surf the web and use the device for email communication. It was a sensation. So let's have a look at how the 7 year old Nokia 5000 compares to a similar device that can be bought today.

Price

For my comparison I chose the Android based LG D160, released back in 2014 and currently available for around 56 euros, contract free, VAT included. That is only around 60% of the 90 euros I paid for the Nokia 5000 at the time. I could have made a comparison to a device that also costs 90 euros today, but I wanted to compare two entry level devices, and the cost of such a device has come down significantly over the years.

Connectivity

At the time, being able to browse the web with an entry level device was spectacular; today it's a given and nobody would think otherwise anymore. Back then I used Opera Mini with a compression server in the cloud to reduce the size and complexity of web pages. This was necessary on the one hand because the Nokia 5000 only came with a 2G EDGE network interface that could at best transport around 250 kbit/s, and on the other hand because the processing power and memory of the Nokia 5000 were quite low compared to today's devices. 3G networks did exist at the time and already covered bigger cities, but entry level devices were still limited to 2G networks. The LG D160 of 2014, on the other hand, comes equipped with a 3G HSPA network interface with data transfer speeds of up to 14.4 Mbit/s. LTE networks are available nationwide today, but it's the same story as with 3G for the Nokia 5000 then: LTE hasn't moved down into the entry level category yet. What is included today that was considered a high end feature at the time is Wi-Fi, so the device can be used at home without a cellular network. Also, the device supports tethering, so it can be used as a gateway to the Internet for a notebook or tablet on the move.

Screen and Web Browsing

The image on the left shows the Nokia 5000 and the LG D160 side by side and next to a Samsung Galaxy S4, a flagship device back in 2013. While the Nokia 5000 back in 2008 came with a 320×240 pixel screen capable of 65k colors, the LG D160 now has a 320×480 pixel screen with 16 million colors. By today's standards that is a very low resolution, but compared to 2008 it is still twice the number of pixels. Opera is still my browser of choice, but I have moved on from Opera Mini to Opera, a full web browser that no longer requires a compression server on the backend, as the device has enough RAM and processing power to show mobile optimized and even full web pages without any magic applied in between. At the time it took around 12 seconds to launch the browser and there was no multitasking. Still acceptable then, but today the browser launches in 4 seconds and even stays in memory if no other big apps are running, despite the device having only 512 MB of RAM, which is a massive amount compared to 2009 but rather little today. GSMArena doesn't even specify how much RAM was built into the Nokia 5000, but the 12 MB of flash memory for file storage, compared to the 4 GB in the D160 today, are a pretty good indication of what it must have been. Another aspect I focused on at the time was how fast and smooth scrolling was, and I noted that compared to the flagship Nokia N95 at the time it was slower and not as smooth. Still usable was the verdict. Today, scrolling of normal web pages via a touchscreen is quite smooth on the D160 and light-years away from what was possible on entry level devices in 2008/9.

eMail

At the time, the email client in the Nokia 5000 was quite rudimentary, with important options such as partial downloads missing. Also, there were few if any email apps for non-smartphone devices at the time to improve the situation. Today, even the 40% cheaper D160 easily runs sophisticated email clients such as K9 mail that, apart from a proper PGP implementation, leaves little to be desired.

Camera, Navigation and Apps

When it comes to built-in cameras, the Nokia 5000 from back in 2009 has a 1 MP camera at the back, while today's D160 has a 3 MP camera built in. Both take pictures, but both would be rated pretty much worthless by the standards of their respective periods. Still, the camera is significantly better at a much reduced price compared to 2009. One big advantage of today's entry level smartphones compared to 2009 is the built-in GPS chip, useful for anything from finding the closest Italian restaurant to car navigation. I didn't install Osmand on the D160, but Google Maps pinpointed my location in seconds and presented me with a route to a destination almost instantly. An incredible improvement over the state of the art in 2009 in this price category. I mention the price tag on purpose, as Nokia Maps with car navigation existed in 2008/9 (see here and here) but could only be used on much more expensive Symbian OS based devices. A final point to make in this review is the availability of apps now and then. Few apps and games existed for entry level devices back then. Today, even the very low cost D160 can handle most Android apps and many if not most games (I'm no expert when it comes to gaming). Also, SMS messaging is quickly disappearing, with most people not caring about privacy and using Internet based multimedia replacement solutions such as WhatsApp instead.

Summary

So while I still use the notebook I bought back in 2009 with the latest operating system version on the market today, the entry level phone from back then is so outdated compared to today's entry level state of the art that I find it quite shocking. Incredible how things have advanced in mobile in this short amount of time.

What Has Changed In Desktop Computing Since 2009?

When I recently checked out a "very low end" smartphone of 2015 I couldn't help noticing how vastly different and improved things are compared to smartphones sold a couple of years ago. I'll write a follow up article about this but I think the scene should be set first with a comparison: What happened in desktop/laptop computing since 2009?

I chose 2009 for this post as this was the year I bought a 17" laptop, mainly for stationary use, to replace an aging tower PC. As my usage has become more mobile since then, I have had to replace this laptop with a smaller device for everyday use in the meantime. Nevertheless, I still use that laptop today, 6 years later (!), for streaming Netflix, Youtube and other things. So while I still use this 6 year old computer, any phone from that era has long gone to digital oblivion.

So is that 6 year old laptop old and outdated? I guess that depends on how you look at it. At the time I bought the laptop for 555 euros with an Intel Core 2 Duo processor, 4 GB of RAM, a 256 GB hard disk, USB 2, a 17" display and Windows Vista. Even if I hadn't upgraded the machine, Windows Vista pretty much looks like Windows 7, which is still widely used today. I could even upgrade the machine to Windows 8 or Windows 10, which ships in a few weeks from now, and it would still run well on a 4 GB machine. As a matter of fact, many low end laptops sold today still come equipped with 4 GB of RAM. Hard disk sizes have increased a bit since then, USB 3 ports are now standard, CPUs are perhaps twice as powerful now (see here and here) and the graphics capabilities required for gaming are more advanced. But for my (non-gaming) purposes I don't feel a lot of difference.

As I switched to Linux in the meantime, my software evolution path was different. Windows was banished from the machine at some point and replaced by Ubuntu Linux. Ubuntu's graphical user interface looked different in 2009; a lot of eye candy has been added since then. Today I run Ubuntu 15.04 on the machine and I upgraded it with a 256 GB SSD, which makes it in effect look no different from my almost up-to-date notebook. It also still behaves pretty much the same when it comes to program startup and reaction times. The major difference is that the fan is louder compared to my current laptop due to the higher power requirements of laptops from the 2009 time frame compared to today's machines.

So what has changed since 2009 in the laptop world? Prices have certainly come down a bit since then and many people these days buy laptops in the €300 to €400 range (taxes included). Technical specs have improved a bit but the look and feel is pretty much the same. Companies have started experimenting with touch screens and removable displays to create a more "tablet-like" experience, trying to import some of the fascinating advances that have happened elsewhere since. But that's still a niche at best. In other words, hardware and software evolution on the desktop have very much slowed down compared to the 1990's, which was the second half of the home computer era and the decade of the rise of the PC and Windows. Things already slowed in the 2000's, but that decade still saw the rise of easier to use Windows versions and prices for laptops coming down significantly.

Now try to remember what kind of mobile phone or smartphone you had in 2009 and compare that to what you have today and you'll see a remarkable difference to the story above. More about that in a follow up post.