Bad Internet Connectivity Makes Me Leave The Turkish Airlines Lounge

The Turkish Airlines Lounge in Istanbul is by all means one of the coolest places to stay at any airport around the globe. Well, at least it was until now. Apart from a nice interior, one thing that is absolutely crucial to me and many other business travelers is good Internet connectivity. And that is becoming more and more difficult to get in this lounge.

While there is Wi-Fi in the lounge, OpenVPN and IPsec connectivity is blocked. I have no idea why, but I'm probably not the only business traveler who is more than unhappy about this. At least I can use an SSH tunnel VPN that they (forgot to?) block to get my data safely through the network. Another option that has worked in the lounge so far is to tether my PC to the Internet via a mobile device and one of the local cellular networks. Unfortunately, on both of my recent visits, Turkcell and Vodafone Turkey failed miserably.
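For the record, a minimal sketch of that SSH-tunnel workaround, assuming an SSH server you control somewhere on the Internet (hostname and port below are placeholders):

```python
import subprocess

# Hypothetical server; replace with your own SSH host.
SSH_HOST = "user@myserver.example.com"
SOCKS_PORT = "1080"

# -N: don't run a remote command, -D: open a local SOCKS5 proxy that
# forwards all traffic through the encrypted SSH connection. Then point
# the browser or system proxy settings at localhost:1080.
subprocess.run(["ssh", "-N", "-D", SOCKS_PORT, SSH_HOST], check=True)
```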

Outside the lounge at the gates, both networks worked well, so I decided to leave. Perhaps one of the companies involved cares and does something about the situation before next time. Would be nice…

In-Flight Internet Reloaded On A Flight To Asia

Back in 2011 I had my first in-flight Internet experience over the Atlantic with a satellite-based system. Since then I've been online a couple of times during domestic flights in the US, where a ground-based system is used. In Europe most carriers don't offer in-flight Internet access so far, but an LTE-based ground system is in the making, which will hopefully have enough bandwidth to support the demand in the years to come. When I recently flew to Asia I was positively surprised that Turkish Airlines offered Internet access on both my outbound and inbound flights. Free in business class and available for $15 for the duration of the flight in economy class, I was of course interested in how well it would work, despite both flights being night flights and a strong urge to sleep.

While most people were still awake in the plane, speeds were quite slow. Things got a bit better once people started to doze off, and I could observe data rates in the downlink direction of between 1 and 2 Mbit/s. Still, web browsing felt quite slow due to the 1000 ms round-trip delay over a geostationary satellite. But it worked, and I could even do some system administration over ssh connections, although at such round-trip times command line interaction was far from snappy.
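To put the 1000 ms into perspective: even a plain TCP connection setup costs a full round trip before any data flows, and every echoed keystroke in an interactive ssh session pays the same price. A quick way to measure it, as a sketch (the target host is just an example):

```python
import socket
import time

def tcp_rtt(host: str, port: int = 443) -> float:
    """Time in ms for a TCP three-way handshake, which takes
    roughly one round trip to complete."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=10):
        pass
    return (time.monotonic() - start) * 1000

# Over a geostationary link expect several hundred ms at minimum:
# the signal travels ~36,000 km up and down in each direction,
# plus queuing and processing delays on top.
print(f"{tcp_rtt('example.org'):.0f} ms")
```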

In the uplink I got data rates of around 50 to 100 kbit/s on my outbound leg, which made it pretty much impossible to send anything larger than a few kilobytes. On the return trip I got around 300 kbit/s in the uplink direction when I tried. Still not fast, but much more usable.
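A quick back-of-the-envelope calculation shows what such uplink rates mean in practice (the 2 MB photo is an arbitrary example):

```python
def upload_seconds(size_kbytes: float, rate_kbit_s: float) -> float:
    """Time to push a file through an uplink of the given rate."""
    return size_kbytes * 8 / rate_kbit_s

# A 2 MB photo at the observed uplink rates:
for rate in (50, 100, 300):
    print(f"{rate} kbit/s -> {upload_seconds(2000, rate) / 60:.1f} minutes")
# ~5.3 minutes at 50 kbit/s, ~2.7 at 100 kbit/s, ~0.9 at 300 kbit/s
```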

Apart from web browsing and some system administration over ssh, I mostly used the available connectivity to chat and exchange pictures with people at home using Conversations. While the link was available most of the time, I noticed a number of outages ranging from a few tens of seconds to several minutes. I'm not sure what caused them, but surely not clouds or bad weather above the plane… 🙂
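Outages like these could be made visible with a few lines of code; a sketch that probes a well-known address and logs the gaps (probe target and interval are arbitrary choices):

```python
import socket
import time
from datetime import datetime

def reachable(host: str = "8.8.8.8", port: int = 53) -> bool:
    """Consider the link up if a TCP connect succeeds."""
    try:
        with socket.create_connection((host, port), timeout=5):
            return True
    except OSError:
        return False

outage_start = None
while True:
    if not reachable() and outage_start is None:
        outage_start = time.monotonic()
        print(f"{datetime.now():%H:%M:%S} link down")
    elif reachable() and outage_start is not None:
        print(f"{datetime.now():%H:%M:%S} link back after "
              f"{time.monotonic() - outage_start:.0f} s")
        outage_start = None
    time.sleep(10)
```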

While I was overall happy to be connected, I have to say that, like in the US, this system no longer offers enough capacity, and it will become more and more difficult to offer a good customer experience without bumping up speeds significantly.

Wi-Fi Hotspots With Real Encryption Without User Interaction

One of the major issues with public Wi-Fi hotspots is that they are usually unencrypted, which makes users an easy target for eavesdropping. Some Wi-Fi hotspots use encryption, but the PSK password is the same for all users. As a consequence, an attacker who intercepts the authentication procedure can decrypt the traffic easily. This means that the only thing that can be achieved by using WPA2-PSK encryption in public hotspots is a weak form of access control, by trying to keep the password within the group of authorized users. Good luck with that. Thanks to this post over at Heise (in German) I became aware that Dan Harkins of Aruba (now owned by HP) is trying to change this in the IEEE.
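To see why a shared password provides no confidentiality, consider how WPA2-PSK derives its pairwise master key: the derivation is public, and everyone who knows the passphrase and the SSID arrives at the very same key. A short illustration:

```python
import hashlib

def wpa2_pmk(passphrase: str, ssid: str) -> bytes:
    """WPA2-PSK pairwise master key: PBKDF2-HMAC-SHA1 over the
    passphrase with the SSID as salt, 4096 iterations, 256 bits.
    Identical inputs yield an identical key for every user."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(),
                               ssid.encode(), 4096, 32)

# Anyone on the same hotspot computes the very same PMK, and with it
# (after observing a station's 4-way handshake) the session keys:
print(wpa2_pmk("cafe-password", "Cafe-Hotspot").hex())
```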

What Dan proposes in his “Opportunistic Wireless Encryption (OWE)” document, presented back in September 2015, is to use a Diffie-Hellman key exchange instead of WPA2-PSK when establishing a connection to the Wi-Fi access point. The difference is that the user does not have to supply a password: an encrypted tunnel, for which no shared secret is required, is used to exchange a per-device encryption key. In other words, the proposed solution works in the same way as the key exchange used by https to secure web traffic today. No password needs to be given, and the individual key exchanged through the encrypted tunnel ensures that an attacker can't decode the traffic even if they intercept the exchange (which is possible with WPA2-PSK). Two problems solved at the same time: no password, and real encryption.
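For illustration, here is the Diffie-Hellman principle in a nutshell, with deliberately tiny, insecure toy numbers; real implementations use standardized groups or elliptic curves:

```python
import secrets

# Toy parameters only; far too small for real use.
p = 0xFFFFFFFB  # a prime modulus (2^32 - 5)
g = 5           # a public base

# Each side picks a private value and publishes g^x mod p.
a = secrets.randbelow(p - 2) + 1
b = secrets.randbelow(p - 2) + 1
A, B = pow(g, a, p), pow(g, b, p)

# Both sides arrive at the same shared secret; an eavesdropper who
# only sees A and B cannot feasibly recover it (for large enough p),
# and no password ever crosses the air.
assert pow(B, a, p) == pow(A, b, p)
print(hex(pow(B, a, p)))
```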

Unfortunately, there seems to be no widespread support for the idea yet. This document suggests there weren't enough supporters at a meeting in January 2016 to include the idea in the next update of the 802.11 Wi-Fi standards. Let's hope that this will still change, as the current state of public Wi-Fi security is simply unacceptable.

How To Move From Typepad To WordPress

In a free and open web one would expect to be able to move one's website from one service to another without too much hassle. But unfortunately many parts of the web are neither free nor open, and escaping with a blog from Typepad to a WordPress installation requires a bit of tinkering. While quite a number of reports by others on how to move away from Typepad exist on the web, I thought I'd add my story as well, because in the end it was less complicated than I thought. Overall, it took me about one and a half days to get things done. It could have gone faster, but I wanted to experiment a bit to get exactly what I wanted. Read on for the full story.

Linux And A Good Backup Strategy Save The Day

When you travel a lot, chances are good that at some point your computing hardware fails without prior notice or gets stolen. It will happen, it's just a question of when, and it's better to be prepared for it. In fact I was prepared, and it paid off handsomely when a notebook under my care was stolen out of a backpack in a restaurant in Paris last week. First question of the owner: What will they do with my data? Second question: What shall I do now? I can't work without the notebook!

Answer to the first question: They won't do anything with your data. Your notebook ran Ubuntu, it was encrypted to counter exactly this scenario, and your password was long and diverse enough to withstand casual and less casual brute-force attacks. Besides, those people were probably just interested in the hardware anyway… So rejoice that you didn't have Windows, which doesn't encrypt anything unless you have the Pro version… Yeah!

Answer to the second question (what shall I do now): 1. Don't panic (I'm sure you have a towel with you). 2. Don't worry, the last backup of the system partition and the data partition is only 3 days old. That's the amount of data you have lost. And 3. Clonezilla restores your system on a new SSD in 15 minutes. Restoring your 600 GB of data to the user partition takes a little while longer, but it will be done in time for me to catch that 6 am train to Paris to deliver the 1:1 replacement (minus 3 days worth of data).
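For the data partition, a regular mirroring job every few days is all the strategy needs; a minimal sketch using rsync (the source and destination paths are placeholders, and Clonezilla still handles the system partition image):

```python
import subprocess
from datetime import date

# Hypothetical paths; adjust to your own partition layout.
SRC = "/home/"
DST = f"/media/backupdisk/home-{date.today():%Y-%m-%d}/"

# -a: preserve permissions and timestamps, --delete: mirror removals,
# so a restore reproduces the data partition as it was on backup day.
subprocess.run(["rsync", "-a", "--delete", SRC, DST], check=True)
```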

So as sad as the story is, it's great to have a working backup strategy that gets you back up and running on totally different hardware in 15 minutes, with everything (still) installed and configured like on the “old” machine. Thanks, Clonezilla!

Now We Can Almost Switch Off UMTS

Now that all German network operators have switched on VoLTE for voice services over LTE and are transitioning their subscribers to VoLTE step by step, I can picture a mid-term UMTS switch-off quite well. Agreed, we aren't quite there yet, but the list of reasons to keep 3G running has become significantly smaller:

One major aspect will be how quickly VoLTE actually takes off and thus reduces the need to fall back to 3G during voice calls. For the moment, network operators seem to move their subscribers to VoLTE step by step, some slower, some faster. In other words, even though VoLTE is now up and running, not everyone automatically uses it.

Also, to be able to use VoLTE, one needs an LTE smartphone with an embedded VoLTE client. For the moment, only network operator device variants, at least in Germany and I suppose in the rest of Europe as well, come equipped with VoLTE capabilities. Buy the same device outside an operator store and it won't come with VoLTE. That will change in the future once the dust has settled a bit and device manufacturers, operating system and chipset vendors start treating VoLTE as a black box, but I think that is still some time away. Two to three years seems a realistic time frame to me until VoLTE comes out of the box in every new LTE device sold outside operator stores, but that's just a gut feeling.

And once that is in place, network operators have to wait a while until the installed base of non-LTE and non-VoLTE devices has thinned out considerably. Telenor in Norway says it expects all of this to happen by 2020, by which time it wants to switch off its 3G network. And in 2025 it wants to ax its GSM network as well. The timing is a bit tight, but if a network operator accepts that voice fallback for non-VoLTE devices will be to 2G, without data capabilities during the call, then it can certainly meet these deadlines.

Why I Left Typepad For A Self-Administrated WordPress Blog

Welcome, this is the first 'original' post on WirelessMoves' new platform. I've been a loyal Typepad customer for 10 years, but a number of reasons accumulated over time that finally made me switch to a self-installed and self-administrated WordPress instance in the cloud. In case you are interested in the details of why I switched, read on.

One thing that has bugged me for many years is that my $50-per-year account at Typepad would not allow me to use my own domain name. I could have had my own domain linked to Typepad, of course, but after a few years without one, retrofitting it later wasn't appealing anymore. The pricing for my own domain wasn't that appealing either.

Next, there’s no way around the fact that my blog in 2015 still looked almost identical to how it looked like a decade ago. What was slick and modern at the time looks a bit rusty today, the world wide web and design has significantly moved on over time. Also, a mobile friendly design is a must have today and Typepad didn’t offer an answer for me here, either. In other words, Typepad seems to be pretty much in a maintenance only mode rather than trying to continue offering an appealing platform for content creators. Over the years the platform seems to have changed hands a couple of times and the current owner seems to have no intention of changing this sad fact.

On the technical side, a number of gripes have accumulated as well. There's no IPv6 and, even worse, no secure http, not even in the writer's user interface. While the log-in procedure is protected by https, the platform immediately falls back to http afterwards. Especially when using public Wi-Fi hotspots and other non-secure places, this is a significant problem, as the browser cookie giving me editing rights can easily be intercepted. Obviously I always use a VPN whenever I'm not at home, but it should be in Typepad's own interest to keep their customers safe.
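To illustrate the problem: over plain http, the session cookie is part of every request in cleartext, so anyone on the same open Wi-Fi can capture and replay it. Roughly what such a request looks like on the wire (host and cookie value are made up):

```python
# What an http request carries in the clear; on an open Wi-Fi network,
# any nearby station can capture this and reuse the session cookie.
request = (
    "GET /dashboard HTTP/1.1\r\n"
    "Host: blog.example.com\r\n"       # hypothetical host
    "Cookie: session=3f9a1c\r\n"       # the token that grants editing rights
    "\r\n"
)
print(request)
```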

Next on the list of things I really would have liked to have is internal statistics about what is read on my blog, beyond Typepad's meager count of how many pages have been accessed per day. I used an external service for this purpose for many years, but it shouldn't really have been necessary. Also, Typepad embedded Google Analytics in my blog without my consent, for their own tracking purposes. And finally, Typepad never offered a public search function for my blog. Sure, you can use Google or another search engine for the purpose but, again, it should be part of the platform.

So here we go, that’s the list and it makes me wonder why it took me so long to make the switch!? A self-administered WordPress installation fortunately offered a solution to each and every one of these issues when coupled with the right hosting platform, especially when it comes to IPv6 and https. In a previous post, I wrote about the cool features of Uberspace’s hosting platform and this is where I migrated my blog to. The domain name is in my hands, WordPress is open source and should I decide in the future that I don’t like it there anymore I’m free to go instantly.

Unfortunately, Typepad doesn’t make transferring a blog to another service exactly easy but I got it done in a day and a half. More about that in a follow up post.

32C3 – An Angel In Retrospect – Being Part Of The Conference Instead Of Just Attending

And here is one more post about my 32C3 experience at the end of last year in Hamburg. This was the first conference I did not just “attend” but was actually “a part of”. There is a big difference between the two approaches: normal conferences are fully organized, and you go there to listen to the talks, to meet and talk to people you already know and perhaps, if you are the communicative type, to meet a few new people who share your interests. The annual CCC congresses are different in this respect, because here attendees are encouraged to help with many different aspects of the conference: checking tickets at the entrance, being part of the wardrobe team, operating a camera, helping people find their way around the congress, helping people with their network problems, and so on.

On the one hand this helps to keep ticket prices down, because the 1,500 volunteers who signed up as congress “angels” put in 10,000 work hours, all of them voluntarily and for free. That saves a lot of money. Like me, many might not have altogether altruistic motives to volunteer. Apart from being happy to help, I became a congress angel to get a glimpse of how and by whom the event is organized and how things work behind the scenes. I signed up for a couple of camera shifts and in addition spent some time at the network help desk. Not only did I learn a lot about how the congress is run, but I also met a lot of people during my network help desk shifts, both people seeking help and other network angels on the same shifts, who freely shared their ideas on the stuff they were having fun with during the less busy times (after all, this was a hacker conference, so there weren't too many people with network issues they couldn't figure out themselves). If I had just “attended” the congress I would never have met all these people, and it wouldn't have been half the fun it was!

The crucial thing about becoming an angel at the congress is that there is a system that makes volunteering easy and flexible in the extreme. The main idea is that one is not assigned to do something; instead, one has complete control over what one wants to do and when. The place where work and volunteers come together is the web-based “angel-system”, which works equally well on big and small devices. Here, one can pick tasks and 2-hour time slots before and during the conference that fit into one's overall schedule. I took camera shifts for presentations I wanted to attend anyway, and network help desk duties at times when there was no talk I wanted to go to. During the congress my plans changed slightly, and I could rearrange my shifts in the “angel-system” in a jiffy from my smartphone. A great system that gives the conference the volunteers it needs, and gives the volunteers the freedom to assign tasks to themselves and stay in control. Wonderful!

I'm totally hooked on the concept and feel encouraged to be even more a part of the event next time rather than just attending. So if you plan to come to a CCC congress in the future, sign up as an “angel” before you arrive and have more fun!

LTE-A Pro for Public Safety Services – Part 3 – The Challenges

In case you have missed the previous two parts on Private Mobile Radio (PMR) services over LTE, have a look here and here before reading on. In the previous post I described the potential advantages LTE can bring to PMR services, and from the long list it seems to be a done deal. On the other hand, there is unfortunately an equally long list of challenges that PMR poses for the current 2G legacy technology it uses, and they will not go away when moving on to LTE. So here we go, part 3 focuses on the downsides, which show quite clearly that LTE won't be a silver bullet for the future of PMR services:

Glacial Timeframes: The first and foremost problem PMR imposes on the infrastructure is the glacial timeframe requirements of this sector. While consumers change their devices every 18 months these days and move from one application to the next, a PMR system is static; in the past, a time frame of 20 years without major network changes was considered the minimum here. It's unlikely this will significantly change in the future.

Network Infrastructure Replacement Cycles: Public networks, including radio base stations, are typically refreshed every 4 to 5 years, as new generations of hardware are more efficient, require less power, are smaller, offer new functionality and can handle higher data rates. In PMR networks, timeframes are much more conservative, because additional capacity is not required for the core voice services and there is no competition from other networks that would stimulate operators to make their networks more efficient or add capacity. Also, new hardware means a lot of testing effort, which again costs money that can only be justified if there is a benefit to the end user. In PMR systems this is a difficult proposition, because PMR organizations typically don't like change. As a result, the only reason for PMR network operators to upgrade their infrastructure is that the equipment reaches 'end of life', is no longer supported by manufacturers, and no spare parts are available anymore. The pain of upgrading at that point is even more severe, as after 10 years or so technology has advanced so far that going from very old hardware to the current generation causes many problems.

Hard- and Software Requirements: Anyone who has worked in both public and private mobile radio environments will undoubtedly have noticed that quality requirements are significantly different in the two domains. In public networks the balance between upgrade frequency and stability often tilts toward the former, while in PMR networks stability is paramount and testing is hence significantly more rigorous.

Dedicated Spectrum Means Trouble: An interesting question, which will surely be answered differently in different countries, is whether a future nationwide PMR network should use dedicated spectrum or shared spectrum also used by public LTE networks. If dedicated spectrum is used that is otherwise not used for public services, devices need receivers for that dedicated spectrum. In other words, no mass-market products can be used, which is always a cost driver.

Thousands, Not Millions of Devices per Type: When mobile device manufacturers think about production runs, they think in millions rather than a few ten-thousands as in PMR. Perhaps this is less of an issue today, as current production methods allow design and production runs of 10,000 devices or even less. But why not use commercial devices for PMR users and benefit from economies of scale? Well, many PMR devices are quite specialized from a hardware point of view, as they must be sturdier and have extra physical controls, such as a big push-to-talk button, emergency buttons, etc., that can be pressed even with gloves. Many PMR users will also have requirements for the screens of their devices that go beyond consumer needs, such as being ruggedized beyond what is required for consumer devices and being usable in extreme heat, cold and wetness, or when chemicals are in the air.

ProSe and eMBMS Not Used For Consumer Services: Even though they are also envisaged for consumer use, it is likely that group call and multicast services will in practice be limited to PMR use. That will make them expensive, as the development costs will have to be shouldered by the PMR sector alone.

Network Operation Models

As already mentioned above, there are two potential network operation models for next generation PMR services, each with its own advantages and disadvantages. Here's a comparison:

A Dedicated PMR Network

  • Nationwide network coverage requires a significant number of base stations, and it might be difficult to find enough suitable sites for them. In many cases, base station sites can be shared with commercial network operators, but often enough the masts are already used by the equipment of several network operators and there is no more space for dedicated PMR infrastructure.
  • From a monetary point of view it is probably much more expensive to run a dedicated PMR network than to use the infrastructure of a commercial network. Also, initial deployment is much slower, as no already-installed equipment can be reused.
  • Dedicated PMR networks would likely require dedicated spectrum, as commercial networks would probably not give back any spectrum they own so that PMR networks could use the same bands to make their devices cheaper. This in turn would mean that devices would have to support a dedicated frequency band, which would make them more expensive. From what I can tell, this is what has been chosen in the US with LTE band 14 for exclusive use by a PMR network. LTE band 14 is adjacent to LTE band 13, but devices supporting it might still need special filters and RF front-ends for that frequency range.

A Commercial Network Is Enhanced For PMR

  • High Network Quality Requirements: PMR networks require good network coverage, high capacity and high availability. Also, due to security concerns and the need for fast turn-around times when a network problem occurs, local network management is a must. Today this is typically only found in networks that focus on quality rather than on budget.
  • Challenges When Upgrading The Network: High quality network operators are also keen to introduce new features to stay competitive (e.g. higher carrier aggregation, traffic management, new algorithms in the network) which is likely to be hindered significantly in case the contract with the PMR user requires the network operator to seek consent before doing network upgrades.
  • Dragging PMR Along For Its Own Good: Looking at it from a different point of view, it might be beneficial for PMR users to be piggybacked onto a commercial network, as this 'forces' them through continuous hardware and software updates for their own good. The question is how much drag PMR inflicts on the commercial network, and whether it can remain competitive when slowed down by PMR quality, stability and maturity requirements. One thing that might help is that PMR applications could, and should, run on their own IMS core, and that there are relatively few dependencies further down the network stack. This could allow commercial networks to evolve as required by competition and advances in technology, while PMR applications evolve on dedicated and independent core network equipment. Any commercial network operator seriously considering taking on PMR organizations should investigate this impact on their network evolution and assess whether the additional income from hosting the service is worth it.

So, here we go, these are my thoughts on the potential problem spots for next-generation PMR services based on LTE. Next up is a closer look at the technology behind it, though it might take a little while before I can publish a summary here.

50+ Gbit/s for Vodafone Germany on New Year’s Eve

Like every year, Vodafone has released numbers on mobile network usage during New Year's Eve between 8 pm and 3 am, as this is one of the busiest times of the year. This year, Vodafone says that 185 TB were transferred during those 7 hours. Let's say downlink and uplink traffic split roughly 9:1, which would result in a total of 166.5 TB downloaded during that time. Divided by 7 hours of 60 minutes and 60 seconds each, and then multiplied by 8 to get bits instead of bytes, this results in an average downlink speed towards the wider Internet of 53 Gbit/s. An impressive number, so a single 40 Gbit/s fiber link won't do anymore (if they only had a single site and a single backhaul interconnection provider, which is unlikely). Back in 2011/2012 the same number was 'only' 7.9 Gbit/s.
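The same arithmetic, spelled out (using the 9:1 downlink/uplink split assumed above):

```python
total_tb = 185                 # traffic between 8 pm and 3 am
downlink_tb = total_tb * 0.9   # 9:1 downlink:uplink -> 166.5 TB
seconds = 7 * 3600             # the 7-hour window

# TB -> bytes -> bits, averaged over the window, expressed in Gbit/s
avg_gbit_s = downlink_tb * 1e12 * 8 / seconds / 1e9
print(f"{avg_gbit_s:.1f} Gbit/s")   # ~52.9 Gbit/s
```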

On the other hand, when you compare the 53 Gbit/s for all Vodafone Germany customers to the 30 Gbit/s reached by the uplink traffic during the recent 32C3 congress, or the sustained 3 Gbit/s downlink data rate to the congress Wi-Fi generated by 8,000 mobile devices, the number suddenly doesn't look that impressive anymore. Or compare it to the 5,000 Gbit/s interconnect peaks at the German Internet Exchange (DE-CIX). Yes, it's a matter of perspective!

If you've come across similar numbers for other network operators, please let me know; it would be interesting to compare!