How To Simulate an IP Address Change On A NAT WAN Interface

The vast majority of Internet traffic still runs over IPv4, and when I look at all the different kinds of connectivity I use for connecting my PC and smartphone to the Internet, there isn't a single scenario in which those devices get a public IPv4 address. Instead, I am behind at least one Network Address Translation (NAT) gateway that translates a private IPv4 address into another IPv4 address, either a public one or another private one if several NATs are cascaded. While that usually works well, every now and then one of those NATs changes the IP address on its WAN interface, which creates trouble in some use cases.

Applications that use the TCP transport protocol quickly notice this, as their TCP connection gets broken in the process. Higher layers are notified that the connection is broken and apps can re-establish communication with their counterpart. Apps using UDP as a transport protocol, however, have a somewhat harder time. UDP keep-alive packets sent in one direction to keep NAT translations in place are not enough, as they just end up in nirvana. Bi-directional UDP keep-alive packets also end up in nirvana without the application on top ever being notified about it. The only option such applications have is to implement a periodic keep-alive 'ping' of their own and a timeout after which the connection is considered broken.

Recently I had to simulate such behavior and wondered how best to do that. Once again, a Raspberry Pi acting as a Wi-Fi access point with NAT and an Ethernet backhaul served as a great tool. Three shell commands are enough to simulate an IP address change on the WAN interface:

Do a 'sudo nano /etc/dhcp/dhclient.conf' and insert the following line:

    send dhcp-requested-address 10.10.6.92;

The IP address needs to be replaced with an address from the subnet of the WAN interface that is different from the one currently in use. Obviously, that IP address must not be used by any other device on the WAN subnet, which can be checked by pinging it before changing the configuration file.
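A quick check like the following can be used for that purpose (a minimal sketch; 10.10.6.92 is just the example address from above, and getting no reply is a good sign that it is unused):

    # check that the candidate address is not already in use on the WAN subnet
    ping -c 3 10.10.6.92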

Next, release the current lease and stop the DHCP client on the WAN interface with

   dhclient -r -v

The interface itself remains up, so Wireshark does not stop an ongoing trace, but IP connectivity is removed. If the scenario to be simulated requires a certain amount of time before a new IP address becomes available, just pause before entering the next command.

A new IP address can then be requested with

   dhclient -v eth0

The output of the command shows whether the requested IP address has been granted, or whether the DHCP server has assigned the previous address again or perhaps even an entirely different one. If the DHCP server has assigned the old IP address, changing the MAC address of the interface after releasing the lease will probably help.
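For reference, here is the whole procedure as a short shell sketch. It assumes eth0 is the WAN interface, root privileges, and it reuses the example address from above; the MAC address change is optional and the value shown is only a placeholder:

    # append the requested address to the DHCP client configuration
    echo 'send dhcp-requested-address 10.10.6.92;' >> /etc/dhcp/dhclient.conf

    # release the current lease; the interface stays up but loses IP connectivity
    dhclient -r -v eth0

    # optional: wait to simulate the time until a new address becomes available
    sleep 30

    # optional: change the MAC address so the DHCP server does not hand out the old lease again
    # (placeholder value; some drivers require the interface to be taken down first)
    # ip link set dev eth0 address 02:11:22:33:44:55

    # request a new lease, ideally the address configured above
    dhclient -v eth0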

And that's all there is to it, except for one 'little' thing: If the Raspberry Pi or other Linux device you perform these actions on is itself behind yet another NAT, the server on the Internet will not see an IP address change but just a different TCP or UDP port for an incoming packet. So while the connection will still break in some scenarios, there are potentially scenarios in which a connection can survive. One example is an ongoing conversation over UDP. If the NATs assign the same UDP port translations again, which can but does not necessarily have to happen, and the first UDP packet after the change is sent from the NATed network, the connection might just survive. I can imagine some TCP survival scenarios as well, so don't assume but check carefully whether the exercise produces the expected changes in IP address and port numbers for your network setup.
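One simple way to see what the outside world observes before and after the change is to query one of the many 'what is my IP' services from a device behind the NAT, for example (any similar service works, assuming outbound HTTP access):

    # shows the public IP address as seen by servers on the Internet
    curl ifconfig.me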

Have fun!

Who Is Interested In ‘Mobile’ and ‘Desktop’ Convergence Like I Want It?

For the last couple of years a number of companies have been trying to find a way to converge the operating system and user interfaces of mobile and desktop devices. Perhaps time is getting a bit scarce now, as smartphone processors come pretty close to the computational power, memory size and graphics capabilities of full-grown desktop PCs. Sure, their screens and batteries are smaller, but at some point it will be trivial to connect them to a bigger screen, keyboard and mouse and ask them to be the 'desktop'. Perhaps we will reach this point with tablets first? But what kind of operating system will run on it?

With almost the screen size of a small notebook, the only things missing from a tablet product we use today are a hardware keyboard and a mouse. Apple is getting closer and closer to this point with the latest MacBook of 2015. Due to its thinness and its single USB 3.1 (USB-C) connector, it is pretty much a tablet with a keyboard attached. Unlike a tablet, however, it runs a full desktop operating system. But the keyboard is attached to the screen and the graphical user interface is still geared towards keyboard and touchpad.

Microsoft is also on its way with the Surface line of notebook/tablet hybrids, even though commercial success is nowhere to be seen yet. Their Surface devices now run a full Windows operating system on an x86 processor in a tablet-sized device with a removable keyboard, so that is perhaps even closer to a converged device than the MacBook described above. I don't like the Windows 8 style graphical user interface, and closed source is not my cup of tea either, but they are definitely innovating in this space.

The third player in the desktop/mobile space is Google with Android and Chromebooks. While I like the fact that Chrome OS runs on Linux, the idea that everything lives in the Google cloud makes their vision of a combined mobile/desktop future not very appealing to me. I can imagine storing my data on my own cloud server at home, but I'm not yet willing to give up the huge advantages of on-device data and application storage when it comes to speed, security and being able to get work done in places where Internet connectivity is not present or too slow.

So perhaps it's time now to get hold of a 'Surface' and install Ubuntu on it to see how usable the Unity graphical user interface or perhaps KDE or something else is on a tablet once keyboard and mouse are removed!?

Windows 10 And Wi-Fi Sense – Consider Your Network Compromised?

<sarcasm on> I'm sure lots of people are looking forward these days to upgrading to Windows 10 <sarcasm off>. Personally, the only reason I'm even writing about it is the new Wi-Fi Sense feature that has jumped over from Windows Phone to the desktop to freely share Wi-Fi passwords with your contacts in Outlook, Hotmail, Skype and Facebook. Great for convenience, bad for security, bad for me.

Bad for me because in the past I have let friends access my Wi-Fi network at home. I'm sure I haven't changed my Wi-Fi password since, because it's a pain to change it on all of my devices that use the network, especially printers and other devices that don't offer an easy way to change the password. But I guess I have to do it now, because friends with whom I've shared my password in the past and who upgrade their Windows PCs can now freely share it with their contacts, i.e. people I don't even know, with a simple tick of a check box. Obviously, Microsoft gets to see and store it, too.

Fortunately it's not as bad as it sounds at first as users still have to tick a check box per Wi-Fi network they want to share credentials for. Ars Technica has a detailed description of how the feature works if you are interested in the details. Still, I think it's time for a password change at home.

Yes, I know, giving out my Wi-Fi password was never secure to begin with. This is why I have had a guest Wi-Fi SSID and password for quite some time now that I can quickly switch on and off as required. Another benefit of this solution is that I can change the SSID and/or the password every now and then so things don't get out of hand. This way, even if friends decide to share my guest Wi-Fi credentials with Microsoft and the world, or do so accidentally, it is of little use to anyone, as the guest Wi-Fi SSID is automatically shut down a pre-configured time after the last user has left.

And that, by the way, also limits the damage done by those automatic 'backups' of private data to Google servers that Android-based devices perform every now and then.

LTE Roaming With 1 Mbit/s – They Must Be Joking…

This week, another network operator in Germany announced that they will start LTE data roaming on 1 August. For two reasons, however, the announcement is more than a little bit strange.

First, they are not only a year late to the game, they will also only start with three foreign networks, while their competition already has roaming agreements with several dozen networks around the globe. The second restriction is even weirder: LTE roaming data speeds are limited to 1 Mbit/s, and nobody knows what that is good for!? While my LTE data roaming speeds easily surpass 20 Mbit/s, they want to limit their customers to 1/20 of that. No way this would work for me, because I regularly use LTE data roaming for notebook tethering.

Hm, that makes me wonder if they are bundling a couple of old E-1 lines to connect to their roaming hub and are afraid of massive overload if customers want to use the technology as it was designed!? 😉 Sorry, an announcement like that needs to be ridiculed.

How To Configure OpenVPN So I Can Return to LTE

On my commute to work I make good use of the excellent network coverage along the railway track. LTE coverage is almost perfect, but just almost, as there are a few locations where my data session is handed over to the 3G network. Once on 3G, the only way back to LTE, at least for now, is for the network to put the device into Cell-PCH or Idle state so it can search for LTE and return autonomously. That unfortunately doesn't happen in my case, as my OpenVPN server sends a UDP keep-alive packet every 10 seconds, thus preventing the smartphone I use for tethering from returning to LTE. It's not that big of a deal, as 3G is quite fast as well, so I hardly notice the difference. But I'm a perfectionist… So I had a closer look at the OpenVPN server configuration (in /etc/openvpn/server.conf) and noticed an option for keepalive timers:

keepalive 10 120

The "10" looked suspiciously like the 10-second interval that keeps my 3G connection in Cell-DCH state. After changing the line to

keepalive 30 120

the UDP keepalive packets are now spaced 30 seconds apart. That's more than enough time for the network to put my device into Cell-PCH or Idle state, which in my case happens after around 12 seconds of inactivity. Shortly afterwards, my tethering smartphone changes back to LTE.
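For reference, the change can also be made from the command line and activated with a restart. This is just a sketch: the service name depends on the distribution and on how OpenVPN was set up, and the first keepalive parameter is the ping interval in seconds while the second is the timeout after which the tunnel is considered dead and restarted:

    # change the keepalive ping interval from 10 to 30 seconds in the server configuration
    sudo sed -i 's/^keepalive 10 120/keepalive 30 120/' /etc/openvpn/server.conf

    # restart OpenVPN so the new setting takes effect; the service name varies by setup
    sudo systemctl restart openvpn@server    # or: sudo service openvpn restart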

Perfect! And on top of all this I might even save some battery power as fewer packets are sent and received now.

 

What has Changed In Mobile Computing Since 2009?

In a previous post I wrote about what has changed in desktop computing in the last 6 years. In summary: not very much. I still use my notebook from back then, with an up-to-date operating system, for multimedia consumption. So what about mobile computing and mobile devices, how have things evolved in this domain over the same time frame?

Back in 2008 I wrote a review of how well an entry-level phone at the time, a Nokia 5000, could be used for email and web browsing. Back then, the point was to show that even with an entry-level device it had become possible to surf the web and use the device for email communication. It was a sensation. So let's have a look at how the 7-year-old Nokia 5000 compares to a similar device that can be bought today.

Price

For my comparison I chose the Android-based LG D160, released back in 2014 and currently available for around 56 euros, contract free, VAT included. That is only around 60% of the price I paid for the Nokia 5000 at the time, which cost 90 euros. I could have made a comparison with a device that also costs 90 euros today, but I wanted to compare two entry-level devices, and the cost of such a device has come down significantly over the years.

Connectivity

At the time, being able to browse the web with an entry-level device was spectacular; today it's a given, and nobody would think otherwise anymore. Back then I used Opera Mini with a compression server in the cloud to reduce the size and complexity of web pages. This was necessary on the one hand because the Nokia 5000 only came with a 2G EDGE network interface that could at best transport around 250 kbit of data per second. 3G networks did exist at the time and already covered bigger cities, but entry-level devices were still limited to 2G networks. Compression was also necessary because the processing power and memory of the Nokia 5000 were quite low compared to today's devices. The LG D160 of 2014, on the other hand, comes equipped with a 3G HSPA network interface with data transfer speeds of up to 14.4 Mbit/s. LTE networks are available nationwide today, but it's the same story as with 3G and the Nokia 5000 back then: LTE hasn't moved down into the entry-level category yet. What is included today, and was considered a high-end feature at the time, is Wi-Fi, so the device can be used at home without a cellular network. Also, the device supports tethering, so it can be used as a gateway to the Internet for a notebook or tablet on the move.

Screen and Web Browsing

The image on the left shows the Nokia 5000 and the LG D160 side by side and next to a Samsung Galaxy S4, a flagship device back in 2013. While the Nokia 5000 back in 2008 came with a 320×240 pixel screen capable of 65k colors, the LG D160 now has a 320×480 pixel screen with 16 million colors. By today's standards that is a very low resolution, but compared to 2008 it is still twice the number of pixels. Opera is still my browser of choice, but I have moved on from Opera Mini to Opera, a full web browser that no longer requires a compression server on the backend, as the device has enough RAM and processing power to show mobile-optimized and even full web pages without any magic applied in between. At the time it took around 12 seconds to launch the browser and there was no multitasking. Still acceptable then, but today the browser launches in 4 seconds and even stays in memory if no other big apps are running, despite having only 512 MB of RAM, which is a massive amount compared to 2009 but rather little today. GSMArena doesn't even specify how much RAM was built into the Nokia 5000, but the 12 MB of flash memory for file storage, compared to the 4 GB in the D160 today, is a pretty good indication of what it must have been. Another aspect I focused on at the time was how fast and smooth scrolling was, and I noted that compared to the flagship Nokia N95 of the time it was slower and not as smooth. Still usable was the verdict. Today, scrolling normal web pages via a touchscreen is quite smooth on the D160 and light-years away from what was possible on entry-level devices in 2008/9.

Email

At the time, the email client in the Nokia 5000 was quite rudimentary, with important options such as partial downloads missing. Also, there were few if any email apps for non-smartphone devices at the time to improve the situation. Today, even the 40% cheaper D160 easily runs sophisticated email clients such as K-9 Mail which, apart from a proper PGP implementation, leaves little to be desired.

Camera, Navigation and Apps

When it comes to built-in cameras, the Nokia 5000 from back in 2009 has a 1 MP camera at the back, while today's D160 has a 3 MP camera built in. Both take pictures, but both would be rated pretty much worthless by the standards of their respective periods. Still, the camera is significantly better at a much reduced price compared to 2009. One big advantage of today's entry-level smartphones compared to 2009 is the built-in GPS chip for a variety of uses, from finding the closest Italian restaurant to car navigation. I didn't install Osmand on the D160, but Google Maps pinpointed my location in seconds and presented me with a route to a destination almost instantly. An incredible improvement over the state of the art in 2009 in this price category. I mention the price tag on purpose, as Nokia Maps with car navigation existed in 2008/9 (see here and here) but could only be used on much more expensive Symbian OS based devices. And a final point to make in this review is the availability of apps now and then. Few apps and games existed for entry-level devices back then. Today, even the very low cost D160 can handle most Android apps and many if not most games (I'm no expert when it comes to gaming). Also, SMS messaging is quickly disappearing, with most people not caring about privacy and using Internet-based multimedia replacement solutions such as WhatsApp instead.

Summary

So while I still use the notebook I bought back in 2009 with the latest operating system version on the market today, the entry-level phone from back then is so outdated compared to today's entry-level state of the art that I find it quite shocking. Incredible how much things have advanced in mobile in this short amount of time.

What Has Changed In Desktop Computing Since 2009?

When I recently checked out a "very low end" smartphone of 2015 I couldn't help noticing how vastly different and improved things are compared to smartphones sold a couple of years ago. I'll write a follow up article about this but I think the scene should be set first with a comparison: What happened in desktop/laptop computing since 2009?

I chose 2009 for this post as this was the year I bought a 17" laptop, mainly for stationary use, to replace an aging tower PC. Since my usage has become more mobile since then, I have in the meantime replaced this laptop with a smaller device for everyday use. Nevertheless, I still use that laptop today, 6 years later (!), for streaming Netflix, YouTube and other things. So while I still use this 6-year-old computer, any phone from that era has long since gone to digital oblivion.

So is that 6-year-old laptop old and outdated? I guess that depends on how you look at it. At the time I bought the laptop for 555 euros with an Intel Core 2 Duo processor, 4 GB of RAM, a 256 GB hard disk, USB 2, a 17" display and Windows Vista. Even if I hadn't upgraded the machine, Windows Vista looks pretty much like Windows 7, which is still widely used today. I could even upgrade the machine to Windows 8, or to Windows 10, which ships in a few weeks from now, and it would still run well on a 4 GB machine. As a matter of fact, many low-end laptops sold today still come equipped with 4 GB of RAM. Hard disk sizes have increased a bit since then, USB 3 ports are now standard, CPUs are perhaps twice as powerful now (see here and here) and the graphics capabilities required for gaming are more advanced. But for my (non-gaming) purposes I don't feel a lot of difference.

As I switched to Linux in the meantime, my software evolution path was different. Windows was banished from the machine at some point and replaced by Ubuntu Linux. Ubuntu's graphical user interface looked different in 2009; a lot of eye candy has been added since then. Today I run Ubuntu 15.04 on the machine, and I upgraded to a 256 GB SSD, which makes it in effect look no different from my almost up-to-date notebook. It also still behaves pretty much the same when it comes to program startup and reaction times. The major difference is that the fan is louder than on my current laptop, due to the higher power requirements of laptops from the 2009 time frame compared to today's machines.

So what has changed since 2009 in the laptop world? Prices have certainly come down a bit since then, and many people these days buy laptops in the €300 to €400 range (taxes included). Technical specs have improved a bit, but the look and feel is pretty much the same. Companies have started experimenting with touch screens and removable displays to create a more "tablet-like" experience, trying to import some of the fascinating advances that have happened elsewhere since. But that's still a niche at best. In other words, hardware and software evolution on the desktop has very much slowed down compared to the 1990s, which were the second half of the home computer era and the decade of the rise of the PC and Windows. Things already slowed down in the 2000s, but that decade still saw the rise of easier-to-use Windows versions and laptop prices coming down significantly.

Now try to remember what kind of mobile phone or smartphone you had in 2009 and compare that to what you have today and you'll see a remarkable difference to the story above. More about that in a follow up post.

LTE-only when 3G gets crowded…

While 3G networks are still doing pretty well in most parts of Europe thanks to HSPA+, 64QAM, dual-carrier, etc., I was recently at an airport where the 3G cell covering my location seemed to have severe uplink congestion problems. Ping times were normal while only a little data was transmitted in the uplink direction, but they immediately skyrocketed to several seconds whenever the uplink was somewhat more loaded with screen sharing and a VoIP call. A bit of a letdown.

But then I remembered that the phone I used for tethering had been on LTE just a couple of minutes before and must have been redirected to 3G due to a low signal level. So I decided to lock the phone to LTE-only with an app I discovered recently. Who needs circuit-switched mobile telephony anyway…!? Despite the signal level being really, really low (a single signal bar was just barely shown every now and then), both uplink and downlink were much faster than what I could get over the 3G cell that was very strong at my location. Signal strength isn't everything.

Generally, I think the thresholds the network operator uses for moving between network technologies are a good thing to rely on. In some cases such as this one, however, I'm glad I can make the choice myself.

VoLTE Roaming – From RAVEL to REVOLVER

Many network operators these days are trying to get their VoLTE systems off the ground in their home countries, so perhaps by 2016 we'll finally see a significant number of networks using the system beyond the few that are silently up and running today. While VoLTE will initially only work in the subscriber's home country, many network operators are now thinking about implementing the next step, which is to also offer VoLTE when the user roams abroad instead of relying on CS-fallback to 2G or 3G networks for voice calls. The problem is that this adds quite some complexity to an already very complex system.

The solution favored by many so far is to have VoLTE work abroad in pretty much the same way as circuit-switched calls. The concept is referred to as RAVEL (Roaming Architecture for Voice over IMS with Local Breakout) or LBO (Local Breakout), and its core idea is to use part of the IMS infrastructure in the visited network (i.e. the P-CSCF), which then communicates with the S-CSCF in the home network. Further, calls can be routed directly to another subscriber instead of going back to the home network first. Docomo wrote a good article with further details that can be found here. One of the advantages of the approach is that the P-CSCF has interfaces to the visited core and radio network and can thus establish a dedicated bearer for the speech path and hand the call over to a circuit-switched channel when the subscriber loses LTE coverage. The downside of the approach is that the interaction of the P-CSCF with the IMS in the home network is not a trivial matter.

As a result, network operators have started thinking about a simpler solution in the GSMA REVOLVER group, which has resulted in a 3GPP study item referred to as S8HR (S8 Home Routing). S8 is the packet-switched interface for LTE between a home network and a visited network. The 'Home Routing' part of the abbreviation already indicates that this solution is based on routing everything IMS-related back to the home network without any involvement whatsoever of IMS network components in the visited network, thereby drastically reducing VoLTE roaming complexity. In fact, apart from the MME having to set a parameter in the Attach Accept message, the visited network is not aware of the UE's VoLTE capabilities and actions at all; everything is sent transparently to the P-CSCF in the home network via the home network's PGW. In other words, IMS signaling and voice traffic take the same path as other LTE data from roaming subscribers today. Another interesting thought: VoLTE roaming via S8HR would be like an OTT (over-the-top) service…

Needless to say, the reduced complexity results in a number of disadvantages compared to local breakout. Another Docomo paper, an article by Telecom Italia and a recent post over at the 3G4G blog give a good introduction. One major issue is how to handle emergency calls from roaming subscribers. The challenge of an emergency call for the network is to direct the call to a local emergency responder (e.g. the local police station). As S8HR does everything related to IMS in the home network, there is no way to do that. As emergency calling is a regulatory requirement and unarguably an important feature, it needs to be dealt with. The simplest solution is to instruct the mobile to make a CS-fallback call in case of an emergency. A more complex solution is to use the IMS in the visited network for emergency calling. But I wonder if the additional complexity is worth the more elegant solution? After all, 2G or 3G network overlays will be present in most parts of the world for a very long time to come, so why bother? Or if one bothers, perhaps bother later?

The second, equally problematic drawback is that calls in the visited network can't be handed over to a circuit-switched channel (SR-VCC) when the subscriber runs out of LTE coverage. Again, the IMS in the home network has no way to communicate with network components in the visited network. 3GPP is investigating solutions, but it's likely that if they come up with something it's not going to be a simple solution. Perhaps S8HR with SR-VCC support is no less complicated than RAVEL? It remains to be seen.

The big question is whether not supporting SR-VCC is a showstopper for S8HR. After all, the OTT competition (Skype, etc.) can't do it either. But I suspect it's going to be a showstopper for many network operators, as this is a clear disadvantage compared to traditional circuit-switched voice roaming. On the other hand, mobile devices could offer an option for users to disable VoLTE roaming if they are really bothered by it. I suspect most people won't be, as SR-VCC mainly plays a role in high-mobility scenarios, e.g. in moving cars and trains. One could even think about putting logic into mobile devices to detect roaming and high-mobility scenarios and then prefer CS calls over VoLTE if S8HR is used. That would push the issue from the network to the mobile side, but still, perhaps it is worth a relatively minor effort on the mobile side instead of going to great lengths to implement it on the network side. And again, after all, the competition can't do SR-VCC in the first place…

Tomi Ahonen: What if Microsoft sold Nokia back to Nokia?

Until September 2010 I must have been one of the most outspoken and enthusiastic Nokia fans around. The future was great, the future was bright, Nokia was embracing open source and promised to migrate from the somewhat aging Symbian OS to the open source MeeGo operating system for its upcoming devices. The day Nokia announced that an 'ex'-Microsoft manager was to become the new CEO of Nokia was more than just a shock for me. How could an 'ex'-Microsoft manager possibly continue with open source!?

The day of the 'burning platform' memo marked the not quite unexpected but still abrupt end of my Nokia fandom. MeeGo and open source were to be abandoned and replaced by closed-source Microsoft Windows Mobile; you can't imagine a more drastic 180-degree turn in company strategy, and away from what I would have liked Nokia to do for me. Five years later, Nokia the smartphone maker is no more, bought and destroyed by Microsoft and now completely written off their books due to a complete failure to make Windows Mobile a success.

While I don't mind that Windows is not getting a foothold in mobile, it's a shame that Nokia and its great ideas withered away. While most of the tech press has already written off Nokia the smartphone company, my favorite mobile analyst Tomi Ahonen has three great scenarios for Nokia the smartphone maker coming back, not as a Microsoft subsidiary but as part of Nokia the network infrastructure company. Microsoft wants to get rid of what's left of Nokia mobile, while the original and still existing Nokia wants to make a comeback in mobile. Based on lots of insight and historical knowledge, it's a brilliant piece of analysis of what could happen now. I just wish those in charge would listen and show some good sense, at least this time…