First Network Operator Disables MMS Delivery Due to Stagefright

Several news portals report today (see here, here and here) that a first network operator has temporarily disabled MMS delivery because most Android mobiles in the wild today are vulnerable to the Stagefright bug, which can be exploited by sending videos via MMS and other means. Instead, an SMS is sent to customers with a link to view the MMS contents in a browser. Quite a responsible step, and I wonder how many network operators will follow the example.

This announcement is quite something! I have never before heard of a network operator disabling one of its own services because of a mobile device vulnerability. As the articles linked to above point out, it's no secret in the mobile industry that WhatsApp and other services are now much preferred over expensive MMS messaging, so perhaps little revenue is lost by this step.

In the announcement the carrier said that this is only a temporary solution until a fix has been found. That makes me wonder what that fix could look like and how long this temporary solution will remain in place!? It's unlikely that the threat will go away anytime soon, as it will take quite some time for devices to get patched: an Android OS update is required, which means the fix can't be delivered via Google's app store but needs to be incorporated by device manufacturers into their builds for each device. Good luck with that. Also, I guess there will be many devices that will never get updated, as device manufacturers have already lost interest in providing OS updates for devices that are somewhat older.

Another solution I can imagine would be to put a "virus scanner" in place on the MMS server in the network to filter out malicious videos. But that costs time and money, not only initially but also to keep the signatures up to date. That makes me wonder if the service still makes enough money to justify such a measure!? On that account I wouldn't be surprised if Facebook, Google and others are already scrambling to put scanners in place to make sure videos that are put on their services by users do not contain malicious content.

No matter how I look at it, I can't help but feel that we've just reached a tipping point when it comes to mobile security. Google and device manufacturers need to do something radical and drastic NOW to make sure that future Android devices can be patched in a timely manner (i.e. in a matter of hours, just as is possible for desktop operating systems) rather than having to wait for device manufacturers to come up with new builds or, even worse, not being able to patch Android devices at all anymore due to lack of manufacturer support.

Why Don’t We Still Have A SIMPLE Smartphone Remote Help Function?

For years now I've been hoping that someone comes up with an easy-to-install remote smartphone help function to see and control the screen remotely. On the desktop, VNC has existed in various flavors almost since the beginning of desktop GUIs. On all mobile operating systems, even on open-source Android, there is no good and simple solution. Yes, there are some solutions that have been started and abandoned again, like the droid VNC server project, but nothing that just works out of the box and across different devices and Android versions. I have to admit I'm a bit frustrated because I could use a remote help function at least once a week.

A couple of years ago I was extremely thankful to Google for bringing Wi-Fi tethering to the masses when it was still perceived as something "terrible" by most network operators. But for a remote screen implementation I personally can't count on Google, because I'm sure that should they come up with such a thing they would put one of their servers between the smartphone and the helping hand. No thanks, I need something more direct, private and secure. But Android is open and I would even accept something that requires a rooted phone, so I can't help but wonder why nobody has come up with a good, simple and interoperable solution so far!?

P.S. Cyanogen, hello!?

How To Simulate an IP Address Change On A NAT WAN Interface

The vast majority of Internet traffic still runs over IPv4, and when I look at all the different kinds of connectivity I use for connecting my PC and smartphone to the Internet, there isn't a single scenario in which I have a public IPv4 address for those devices. Instead, I am behind at least one Network Address Translation (NAT) gateway that translates a private IPv4 address into another IPv4 address, either a public one or another private one in case several NATs are cascaded. While that usually works well, every now and then one of those NATs changes the IP address on its WAN interface, which creates trouble in some use cases.

Applications that use the TCP transport protocol quickly notice this, as their TCP link gets broken by the process. Higher layers are notified that the TCP link is broken and apps can re-establish communication with their counterpart. Apps using UDP as a transport protocol, however, have a somewhat harder time. UDP keep-alive packets sent in one direction to keep NAT translations in place are not enough, as they just end up in nirvana. Bi-directional UDP keep-alive packets also end up in nirvana, without the application on top ever being notified about it. The only chance such apps have is to implement a periodic keep-alive 'ping' and a timeout after which the connection is considered broken, as sketched below.
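
As a rough illustration, here is a minimal shell sketch of such a keep-alive with timeout logic. The peer address, the port and the expected 'pong' reply are just assumptions for this example:

    # send a keep-alive every 10 seconds, give up after 3 missed replies;
    # assumes a peer at 198.51.100.10:40000 that answers "ping" with "pong"
    PEER=198.51.100.10; PORT=40000; MISSED=0
    while true; do
        # -u = use UDP, -w1 = wait at most one second for a reply
        if echo "ping" | nc -u -w1 "$PEER" "$PORT" | grep -q "pong"; then
            MISSED=0
        else
            MISSED=$((MISSED+1))
            [ "$MISSED" -ge 3 ] && { echo "connection considered broken"; break; }
        fi
        sleep 10
    done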

Recently I had to simulate such behavior and wondered how to best do that. Once again, a Raspberry Pi acting as a Wi-Fi access point with NAT and Ethernet backhaul served as a great tool. Three shell commands are enough to simulate an IP address change on the WAN interface:

Do a 'sudo nano /etc/dhcp/dhclient.conf' and insert the following line:

    send dhcp-requested-address 10.10.6.92;

The IP address needs to be replaced with an address from the subnet of the WAN interface that is different from the one currently used. Obviously, that IP address must not be in use by any other device on the WAN subnet, which can be checked by pinging the IP address before changing the configuration file.
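
For example, for the address used above:

    # three pings to the candidate address - no replies suggest it is free
    ping -c 3 10.10.6.92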

Next, release the currently assigned IP address and stop the DHCP client on the WAN interface with

   dhclient -r -v

The interface itself remains up, so Wireshark does not stop an ongoing trace, but IP connectivity is removed. If the scenario to be simulated requires a certain time before a new IP address becomes available, just pause before entering the next command.

A new IP address can then be requested with

   dhclient -v eth0

The output of the command shows whether the requested IP address has been granted, or whether the DHCP server has assigned the previous IP address again or perhaps even an entirely different one. If the DHCP server insists on assigning the old IP address, changing the MAC address after stopping the DHCP client will probably help.
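
A minimal sketch of how that could look, assuming eth0 is the WAN interface and using an arbitrary locally administered MAC address as an example:

    # bring the interface down, set a new MAC address and request a new lease
    sudo ip link set dev eth0 down
    sudo ip link set dev eth0 address 02:12:34:56:78:9a
    sudo ip link set dev eth0 up
    sudo dhclient -v eth0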

And that's all there is to it, except for one 'little' thing: If the Raspberry Pi or other Linux device you perform these actions on is yet again behind another NAT, the server on the Internet will not see an IP address change but just a different TCP or UDP port for an incoming packet. So while the connection will still break in some scenarios, there are others in which a connection can survive. One example is an ongoing conversation over UDP: if the NATs happen to assign the same UDP port translations again, which can but does not necessarily happen, and the first UDP packet is sent from the NATed network, the connection might just survive. I can imagine some TCP survival scenarios as well, so don't assume but check carefully that the exercise produces the expected changes in IP address and port numbers for your network setup.
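
A quick way to check the externally visible IPv4 address before and after the exercise is to ask one of the well-known 'what is my IP' services; the port numbers are best checked in a packet trace:

    # externally visible IPv4 address, run before and after the change
    curl -4 ifconfig.me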

Have fun!

Who Is Interested In ‘Mobile’ and ‘Desktop’ Convergence Like I Want It?

For the last couple of years a number of companies have been trying to find a way to converge the operating systems and user interfaces of mobile and desktop devices. Perhaps time is getting a bit scarce now, as smartphone processors come pretty close to the computational power, memory size and graphics capabilities of full-grown desktop PCs. Sure, their screens and batteries are smaller, but at some point it will be trivial to interconnect them with a bigger screen, keyboard and mouse and ask them to be the 'desktop'. Perhaps we reach this point with tablets first? But what kind of operating system will run on it?

With almost the screen size of a small notebook, the only things missing from the tablets we use today are a hardware keyboard and a mouse. Apple is getting closer and closer to this point with the latest Macbook 2015. Due to its thinness and use of only a single USB 3.1 connector it is pretty much a tablet with a keyboard attached. Unlike a tablet, however, it runs a full OS X. But the keyboard is attached to the screen and the graphical user interface is still geared towards keyboard and touchpad.

Microsoft is also on its way with the Surface line of notebook/tablet hybrids, even though commercial success is nowhere to be seen yet. Their Surface notebooks/tablets are now also running a full Windows operating system on a tablet-sized device with an x86 processor and a removable keyboard, so that is perhaps even closer to a converged device than the Macbook described above. I don't like the Windows 8 style graphical user interface, and closed source is not my cup of tea either, but they are definitely innovating in this space.

The third player in the desktop/mobile space is Google with Android and Chromebooks. While I like the fact that Chrome OS runs on Linux, the idea that everything is in the Google cloud makes their vision of a combined mobile/desktop future not very appealing to me. I can imagine my data being stored on my own cloud server at home, but I'm not yet willing to give up the huge advantages of on-device data and application storage when it comes to speed, security and being able to get work done in places where Internet connectivity is not present or too slow.

So perhaps it's time now to get hold of a 'Surface' and install Ubuntu on it to see how usable the Unity graphical user interface or perhaps KDE or something else is on a tablet once keyboard and mouse are removed!?

Windows 10 And Wi-Fi Sense – Consider Your Network Compromised?

<sarcasm on> I'm sure lots of people are looking forward to upgrading to Windows 10 these days <sarcasm off>. Personally, the only reason why I'm even writing about it is the new Wi-Fi Sense feature that has jumped over from Windows Phone to the desktop to freely share Wi-Fi passwords with your contacts in Outlook, Hotmail, Skype and Facebook. Great for convenience, bad for security, bad for me.

Bad for me because in the past I let friends access my Wi-Fi network at home. I'm sure I haven't changed my Wi-Fi password since, because it's a pain to change it on all of my devices that use the network, especially printers and other devices that don't offer an easy way to change the password. But I guess I have to do it now, because friends with whom I've shared my password in the past and who upgrade their Windows PCs can now freely share it with their friends, i.e. with people I don't even know, with a simple tick in a check box. Obviously, Microsoft gets to see and store it, too.

Fortunately it's not as bad as it sounds at first as users still have to tick a check box per Wi-Fi network they want to share credentials for. Ars Technica has a detailed description of how the feature works if you are interested in the details. Still, I think it's time for a password change at home.

Yes, I know, giving out my Wi-Fi password was never secure to begin with. This is why I have had a guest Wi-Fi SSID and password for quite some time now that I can quickly switch on and off as required. Another benefit of this solution is that I can change the SSID and/or the password every now and then so things don't get out of hand. This way, even if friends deliberately or accidentally share my guest Wi-Fi credentials with Microsoft and the world, they are of little use to anyone, as the guest Wi-Fi SSID is automatically shut down a pre-configured time after the last user has left.

And that, by the way, also limits the damage done by those automatic 'backups' of private data to Google servers that Android-based devices perform every now and then.

How To Protect Against IPv6 Leakage in an IPv4 VPN Environment – Part 3 (Raspi VPN Gateway)

In the previous two parts on IPv6 leakage in IPv4-only VPN environments I’ve taken a look at how things can be fixed on the client side (part 1) and on the network side (part 2). At conferences and in hotels I often use a Raspberry Pi Wi-Fi VPN client gateway to connect all my Wi-Fi devices to the local network with a single sign-in. Once connected, the Raspberry Pi establishes a secure VPN connection that is then used by all my devices. In other words, the VPN tunnel is not established from my PC but from the gateway. The big question is: does IPv6 leakage occur here as well?

I gave it a quick try and everything is o.k. By default, Raspbian does not activate IPv6 at all. When it is activated manually (sudo modprobe ipv6), the Raspi will request an IPv6 address on the backhaul interface. If it gets one, it doesn’t share it with the local Wi-Fi link to which all my devices are connected. In other words, no bridge is created, no IPv6 leakage can occur, and all traffic to and from the local Wi-Fi devices passes through the VPN tunnel.
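
For reference, here is roughly how this can be checked, assuming eth0 is the backhaul interface and wlan0 the access point interface of the gateway:

    # activate IPv6 manually, then look at which interfaces get
    # global IPv6 addresses - wlan0 should show none
    sudo modprobe ipv6
    ip -6 addr show eth0
    ip -6 addr show wlan0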

Good, one thing less to deal with…

The Martian – A Book Review

O.k., this is a bit out of the ordinary on this blog, but apart from computers and networks I’m also interested in spaceflight. Over the weekend I came across “The Martian” by Andy Weir due to a recommendation of an online ebook store and found it so outstanding that I had to write a few words about it here…

The story is about an astronaut in the near future stranded on Mars after a mission abort has gone horribly wrong, and about his quest to survive until he can be picked up. Finally a book about Mars again that is not about astronauts encountering hostile aliens that want to kill them. What stands out is not only the storyline with lots of twists, turns and surprises, but also how closely the story is woven around NASA’s plans to send humans to Mars and the technology that exists or is under development. Also, the writing style and the protagonist’s character and humor make this book a page turner. I usually take my time reading books but I went through this one in two days (i.e. nights) flat. I just couldn’t put it down, it’s an extraordinary piece of work. 10 thumbs up!

LTE Roaming With 1 Mbit/s – They Must Be Joking…

This week, another network operator in Germany announced that they will start LTE data roaming on 1 August. For two reasons, however, the announcement is more than a little bit strange.

First, they are not only a year late to the game, but they will also only start with three foreign networks, while their competition already has roaming agreements with several dozen networks around the globe. The second restriction is even weirder: LTE roaming data speeds are limited to 1 Mbit/s and nobody knows what that is good for!? While my LTE data roaming speeds easily surpass 20 Mbit/s, they want to limit their customers to 1/20 of that. No way this would work for me, because I regularly use LTE data roaming for notebook tethering.

Hm, that makes me wonder if they are bundling a couple of old E-1 lines to connect to their roaming hub and are afraid of massive overload if customers want to use the technology as it was designed!? 😉 Sorry, an announcement like that needs to be ridiculed.

How To Protect Against IPv6 Leakage in an IPv4 VPN Environment – Part 2 (Server Side Changes)

While I’m excited about the availability of IPv6 from my mobile network operator of choice, the disadvantage that comes along with it is IPv6 leakage when using Witopia’s VPN service with OpenVPN on Ubuntu. For some strange reason their DNS servers answer IPv6 AAAA requests even though their product only tunnels IPv4 packets. My OpenVPN server setup at home on a Raspberry Pi showed the same behavior so far, but as it is under my own control I started looking for ways to change that.

At first it looked as if it would be a straightforward thing to implement. But that first look was a bit deceiving, and in the end it took a bit more tinkering before the DNS server queried through the VPN tunnel only returned DNS responses for IPv4 A-requests and empty results for IPv6 AAAA-requests.

The OpenVPN server setup I have linked to above relies on a DNS server already present in the server’s network to also answer queries from remote OpenVPN clients. As there was no way for me to change that DNS server’s behavior, I had to set up a separate DNS server on the OpenVPN server Raspi and then reconfigure OpenVPN to point clients to that DNS server.

Bind Comes To The Rescue

On Linux, “bind” is one of the most popular DNS servers. While it is probably overkill to use “bind” just as a DNS request forwarder, it does offer a nice option to return an empty result for IPv6 AAAA-requests when they are sent over IPv4. Here’s some more background information on the option. For this option to work, bind needs to be compiled with a special flag to recognize the “filter-aaaa-on-v4 yes;” configuration option. Unfortunately, bind on Raspbian does not come compiled with it, so I had to compile bind from scratch. That sounds more difficult than it is, however.

But perhaps your distribution set the correct flag before bind was compiled, so my advice is to install bind from the repositories first and see if it works with the “filter-aaaa-on-v4” option. If it doesn’t, it can be uninstalled before downloading and compiling bind from its source. This approach also has the benefit that all configuration files are already in place, which is perhaps not the case when compiling from source.

Installing Bind From The Repositories

Installing bind from the repositories works with a simple “sudo apt-get update && sudo apt-get install bind9” command. Afterwards, uncomment and modify the following section in “/etc/bind/named.conf.options” to enable DNS query forwarding to the upstream DNS server used in the network:

    forwarders {
         8.8.8.8;
    };

Once done, restart bind to see if the configuration change has been accepted: “sudo service bind9 restart“. If no error messages are shown on the console, bind is up and running and can be tested with a first query.
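
For example, directly on the Raspberry Pi, the local bind instance can be queried explicitly:

    # an A record in the answer section shows that forwarding
    # to the upstream server works
    dig @127.0.0.1 youtube.com A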

Check If the IPv6 Option Works

In the next step, add the IPv6 filter option to the same configuration file and also allow queries from other networks by inserting the two additional lines at the end of the options section below (“filter-aaaa-on-v4 yes;” and “allow-query {any;};”). The latter is necessary because the OpenVPN clients get IPv4 addresses from a subnet that is different from the subnet that the OpenVPN server uses on the backhaul link:

options {
    directory "/var/cache/bind";

    //…

    forwarders {
         8.8.8.8;
    };

    //…

    dnssec-validation auto;

    auth-nxdomain no;    # conform to RFC1035
    listen-on-v6 { any; };

    filter-aaaa-on-v4 yes;
    allow-query {any;};
};

Once done, restart bind again. If an error message is shown that the filter option is not supported, some extra work is required. Otherwise, you are almost good to go, and the only thing required for OpenVPN clients to query this DNS server instead of the previous one is to change the DNS option in the OpenVPN config file as shown at the end of this post.

Compile and Install Bind From Source

Before proceeding, uninstall bind again with “sudo apt-get remove bind9“. While this removes the binaries, it leaves the configuration files, including the one we have modified, in place. Now download, compile and install bind with the following commands as described here. As there might be a more up-to-date version of bind at the time you read this, it might be worthwhile to modify the version number in the commands accordingly.

sudo apt-get install build-essential openssl libssl-dev libdb5.1-dev
mkdir bind-install
cd bind-install
wget ftp://ftp.isc.org/isc/bind9/9.9.7/bind-9.9.7.tar.gz
tar zxvf bind-9.9.7.tar.gz
cd bind-9.9.7

fakeroot ./configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --sysconfdir=/etc/bind --localstatedir=/var --enable-threads --enable-largefile --with-libtool --enable-shared --enable-static --with-openssl=/usr --with-gnu-ld --with-dlz-postgres=no --with-dlz-mysql=no --with-dlz-bdb=yes --with-dlz-filesystem=yes --with-dlz-stub=yes CFLAGS=-fno-strict-aliasing --enable-rrl --enable-newstats --enable-filter-aaaa

sudo make install

Some patience is required, as the process takes around 45 minutes. But once done, everything is ready and a “sudo /etc/init.d/bind9 start“ will start the service, this time with the IPv6 filter available, as the configuration file we modified further above is still in place.

OpenVPN modification

The last step is to tell the OpenVPN server to point new clients to this DNS server. This is done by modifying the push “dhcp-option DNS x.x.x.x” option in the “/etc/openvpn/server.conf” file, with x.x.x.x being the IP address of the Raspberry Pi as seen from the VPN clients. A “sudo service openvpn restart” activates the change.
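
As a concrete example, assuming the common default OpenVPN tunnel subnet in which the server itself is reachable at 10.8.0.1 (replace this with the address of your own setup):

    # /etc/openvpn/server.conf
    push "dhcp-option DNS 10.8.0.1"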

Verifying That Everything Works

The next time an OpenVPN client device connects to the server, all DNS requests for AAAA records get an empty response. This can be verified, e.g., by running “dig youtube.com AAAA” on the client, which should return an empty result and not an IPv6 address. Another option is to use Wireshark for the purpose.
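
For example, on a connected client:

    # the AAAA query should come back with an empty answer section
    # while the A query still returns IPv4 addresses
    dig youtube.com AAAA
    dig youtube.com A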

And that solves my OpenVPN IPv6 leakage issue without any modification on the client side!

How To Protect Against IPv6 Leakage in an IPv4 VPN Environment – Part 1

Last year I had a post pointing out that one has to be careful about establishing an IPv4-only VPN tunnel over a network interface that has both a public IPv4 and a public IPv6 address assigned to it. If the DNS server on the other side of the VPN tunnel returns IPv6 addresses and the network stack on the client side prefers IPv6, which is usually the case, then connection establishment will not go through the VPN tunnel but right around it via the physical network interface.

Quickly said at the time and quickly forgotten again, as IPv6 connectivity was still rare. But those days are over, as my mobile network operator of choice now supports IPv4v6 connectivity. When I tether my notebook via my smartphone, it now configures itself for IPv4 and IPv6. That also means that I immediately get unwanted IPv6 leakage while using my VPN.

Some Mac and PC VPN client software used by some VPN providers seems to have built-in protection against this. On my Ubuntu systems, however, the OpenVPN client unfortunately does not. The only way to fix this on the client side is to disable IPv6, either permanently or temporarily.

As I’d like to use IPv6 in general, I don’t want to disable it permanently. A temporary alternative for Ethernet and Wi-Fi connections is to restrict IPv6 to link-local use, e.g. via the connection settings in Network Manager. The problem with this is, however, that a new Wi-Fi connection that one creates, e.g. at a hotel or exhibition venue, will have full IPv6 enabled again, and it’s more than likely that one forgets to turn it off manually after the initial connection establishment.
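
Another temporary option on Linux, and this is only a sketch of one possible approach, is to switch IPv6 off on all interfaces via sysctl until the next reboot:

    # turn IPv6 off on all current and future interfaces until reboot
    sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
    sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1
    # set both values back to 0 (or reboot) to re-enable IPv6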

But why do DNS servers on the other side of an IPv4-only VPN actually have to return IPv6 addresses? I use Witopia for some scenarios and their DNS servers happily return IPv6 addresses. I wish they wouldn’t and it makes me wonder why they are doing it when their VPN service is limited to IPv4 anyway!?

Fortunately, I use my private VPN servers for most of my VPN needs. Their DNS servers also return IPv6 addresses, but here I can change the behavior of the DNS servers behind the VPN server to only return IPv4 DNS results. As configuring that was a bit tricky I’ll make a separate blog post out of that. So stay tuned if you are interested!