3G Experiences in St. Petersburg – Russia – in 2014

Last year I had a devastatingly bad 3G experience in St. Petersburg, so when returning this year I was a bit skeptical of how good my connectivity would be when not at the hotel or at one of the almost ubiquitously distributed Wi-Fi hotspots in the city. So here's a little report of how it worked out this time:

Instead of Beeline I went for Internet connectivity from MTS this time. Getting the SIM card took about 15 to 20 minutes of intensive interaction between the sales person and her computer terminal, still quite a different experience compared to other countries where working SIM cards can be drawn out of machines at the airport in about 10 seconds.

Anyway, once activated I got reliable service inside the city and saw sustained data rates of up to 6 Mbit/s in the downlink direction and around 1-2 Mbit/s in the uplink direction. That's solid service, I would say. Thanks to Google's global APN list in Android, there's also nothing to configure anymore on the mobile, as the APN was set automatically to 'internet.mts.ru'.

Last year I came here with my Nokia N8 and could take great pictures, but was a bit hampered in distributing Internet connectivity to my other devices in the few cases I could get usable cellular connectivity. This year I had a couple of low- to mid-range Android devices in tow, and while their Wi-Fi tethering capabilities made connectivity sharing easy, picture quality visibly suffered compared to last year. I just wish I could have it all in one (affordable) device. Next year perhaps.

Pricing for Internet access is also quite affordable. I paid a total of 350 Rubles (around 7 Euros) for the SIM card, including a 300 Ruble balance on it. Out of that, unlimited Internet access for a month cost 250 Rubles, i.e. around 5 Euros. The daily cap is set to 100 MB, after which the data rate is throttled. More than sufficient for my purposes. With the remaining 50 Rubles I sent a couple of SMS messages and made some local calls. Altogether a very positive experience!

P.S. While LTE seems to be available from MTS in Moscow and 4G is already printed on the SIM cards, it's not yet available to the public in St. Petersburg. However, there are already three LTE networks on air, so it can't take much longer now.

Raspi, Ubuntu and Co and How They Compare to Cooking vs. Instant Food

The title of this post is perhaps a bit long-winded, but it contains an interesting analogy I recently came up with when explaining to a friend the concept of hosting my own cloud services at home rather than using similar services of large Internet-based companies.

Having my own cloud services at home and maintaining them vs. using services of Internet-based companies is, to me, like doing some real cooking versus just consuming instant food. Internet-based companies offer services that are relatively easy to install and use, which is quite similar to instant food that heats up in the microwave in a few minutes. There's a price to pay, however: a less tasty meal in the case of instant food, perhaps, and sharing your private data for analysis and use by an Internet company in the other.

Hosting your own services at home is obviously more work, just as preparing a meal from fresh ingredients is more work. However, my data remains private and secure, which is similar to a self-prepared meal that is more tasty if the cook knows what he is doing.

I like the analogy because it also fits when people say "great that you do it, but I lack the skills and time to do it myself". I hear the same argument when people talk about instant food vs. cooking. Sure, if you've never stood in front of a stove before, learning how to cook is likely to be a challenge, and someone to help get you started is surely a good thing. The same is true for hosting your own services at home or switching to an open source operating system such as Ubuntu Linux.

To summarize: Yes, when it comes to using cloud based services and operating systems I definitely prefer cooking to instant food.

IPv6 Is Nice But It Circumvents My VPN Tunnel

I like IPv6 and I think it's going to be a big help in overcoming the problems NAT (Network Address Translation) causes for self-hosted services at home for the average user. But on the way to full IPv6 support there are a couple of pitfalls one needs to be aware of. When I am not at home I use an IPv4-based VPN tunnel back to my home network and from there to the Internet to make sure Deep Packet Inspection and eavesdroppers on the Wi-Fi link are thoroughly frustrated. But if the local network supports IPv6, packets to and from IPv6-capable websites do not go through the IPv4 VPN tunnel but are exchanged directly between my computer and the website, as I recently had to experience. The only way to fix this is to have a VPN for both IPv4 and IPv6. Unfortunately, neither my VPN gateway at home nor my DSL line supports IPv6 yet. Definitely a chink in the armor one has to be aware of.
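For illustration, here's a minimal sketch of how to check which address family the operating system actually uses to reach a dual-stack site. If it reports an IPv6 source address while only an IPv4 VPN tunnel is up, that traffic is going around the tunnel. The host name is just an example, substitute any dual-stack site you like:

```python
# Check which address family the OS picks for a dual-stack site. If an IPv6
# source address is printed while only an IPv4 VPN tunnel is active, this
# traffic bypasses the tunnel.
import socket

host = "www.google.com"   # example: any site reachable over both IPv4 and IPv6

# Let the OS pick its preferred address, just as a browser would
sock = socket.create_connection((host, 443), timeout=5)
family = "IPv6" if sock.family == socket.AF_INET6 else "IPv4"
print(f"Connected to {host} via {family}, local address {sock.getsockname()[0]}")
sock.close()
```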

Leaking Intercepted Phone Calls Becomes 'En Vogue'

I am amazed at the current steady flow of intercepted phone calls that are leaked in some form or shape. Take the phone calls of the Turkish prime minister, the leaked phone call of the US diplomat who found quite strong words for the foreign policy of the EU, or the phone call between the EU foreign representative and the Estonian foreign minister discussing the situation in Ukraine as prime examples. What few reports ask is who intercepted those calls, who might have leaked them and what their motive was in doing so. Leaving the political questions aside in this post, I think it is safe to assume that the majority of those phone calls were not lucky intercepts by a teenager but the work of professionals in the employ of one state or another. Also, I think it's a safe assumption that these leaks are just the tip of the iceberg. A nice thing about leaked phone calls is that they literally speak for themselves, compared to documents whose authenticity is much harder to prove to the public.

In the meantime it must have dawned on most politicians that they simply have to assume that the majority of their calls are intercepted and recorded if no end-to-end encryption is used. It's not that they don't want to use devices that offer end-to-end encrypted calls, but from what I can tell such devices are still cumbersome to use and interoperability between devices of different makers is virtually non-existent. But perhaps this constant flow of leaked phone calls will trigger people to rethink their position and create a bigger demand for interoperable and easy-to-use devices and applications for end-to-end encrypted calls that are affordable not only for them but for the general public as well. I for one would welcome it, as I think it's not only politicians who by now have no privacy anymore when making a phone call.

NAT Is The Main Inhibitor For Self Hosted Cloud Services

Lots of people I talk to like the idea of having a box at home that can be accessed remotely from notebooks, smartphones and tablets to synchronize private data such as calendar and address book information. They like it because they'd rather keep their private data at home than give it up to companies that store it in a country far, far away and make money by analyzing it and selling advertising in some form or shape. Sooner or later, however, there's always a sentence like 'yes, you [Martin] can do it, but I have no idea how to go about it'. At that point I'd really like to say, 'gee, no problem, just buy box X, connect it to your DSL or cable router at home and you are done'. Unfortunately, that's just not where we are today.

To make self-hosted services for the masses a reality, however, it's exactly such a plug-and-play setup that is required. Anything less and it won't work. I have no problem imagining how most setup steps could be automated. A company could take open source software such as Owncloud, package it on inexpensive hardware such as a Raspberry Pi or even a NAS disk station for the home, and write intelligent setup software that automatically handles tasks such as registering a DynDNS domain, obtaining an SSL certificate for that domain and providing really simple to configure mobile OS connectors for calendar and contact synchronization. Money can be made with all of these steps, and if reasonably priced, I think there's a market for this.

But there are also some technical hurdles that are a bit more tricky. The major one is Network Address Translation (NAT) in today's DSL and cable routers at home. For tech-savvy users it's obviously easy to configure a port forwarding rule, but for the average user it is an insurmountable obstacle. The Universal Plug and Play (UPnP) protocol implemented in most residential gateways offers the means to do this automatically and is used by programs such as Skype. Unfortunately, this functionality is a significant security risk, and many router vendors and DSL/cable network operators have decided to disable it by default. On top of that, many DSL and cable networks today no longer assign public IPv4 addresses to residential connections, preventing even tech-savvy people from having their servers at home.
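To illustrate the UPnP approach, here is a minimal sketch of how an application can ask the residential gateway to forward a port automatically. It assumes the third-party miniupnpc Python binding is installed and, as discussed above, that UPnP has not been disabled on the router; the port numbers and description are just examples:

```python
# Ask the residential gateway to forward an external port to this machine
# via UPnP (IGD). Requires the third-party 'miniupnpc' package
# (pip install miniupnpc) and UPnP enabled on the router.
import miniupnpc

upnp = miniupnpc.UPnP()
upnp.discoverdelay = 200      # milliseconds to wait for gateway replies
upnp.discover()               # search the LAN for an Internet Gateway Device
upnp.selectigd()              # pick the first gateway found

print("External (WAN) IP reported by the router:", upnp.externalipaddress())

# Map external TCP port 8443 to port 443 on this machine, e.g. for a home
# cloud server behind NAT. The port numbers are just examples.
upnp.addportmapping(8443, 'TCP', upnp.lanaddr, 443, 'home cloud HTTPS', '')
```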

Basically, I can see two solutions for this: One would be to have Owncloud and other services integrated into the DSL/cable routers. That's probably the easiest way but it would limit the opportunity to the few companies working on residential routers. The other solution could be for the home cloud box to establish a VPN tunnel to an external service from which it gets a public IPv4 address. Possible, but not ideal as it would introduce a single point of failure.

So perhaps IPv6 will come to the rescue at some point!? Unfortunately, that help will not come tomorrow. In addition, I can't help but wonder whether DSL/cable routers will at some point include IPv6 firewall functionality that blocks incoming connections by default for security reasons. If so, we are back to step 1, for which we need a clever, secure and standardized way to automate the initial connectivity configuration.

Plan-B Tales About My Home Cloud

One tiny downside of running cloud-based services at home, such as Owncloud file, calendar and address book synchronization, VPN services, an instant messaging server, etc., is that one becomes dependent on the power company and the Internet provider to stay connected to those services when not at home. And every now and then things go wrong. Back in December I had a two-hour power outage, which I managed to detect with my GSM-enabled power socket that sent me an SMS once power was restored, so that angle is covered. To survive DSL outages I have a wireless fallback solution in place. And that's just what I needed recently when my DSL line failed for two days.

While it worked rather well, it also demonstrated just how many self-hosted services I use today, for which of them the fallback solution ensured service continuity and for which it didn't. So here's the story:

In addition to the DSL router for normal operation, I have a cellular router in place for backup Internet connectivity over a different default gateway IP address. The cellular router also registers a backup dynamic DNS address so I can still access the network remotely when the DSL line fails. One more thing I need to switch my services to the backup line is a way to remotely change the default gateway addresses of my servers away from the DSL router and towards the cellular router during an outage. For this purpose I use a secure shell (ssh) login on a box in the network that I can reach over the cellular connection: a separate Raspberry Pi to which I have enabled port forwarding from the cellular router over a non-standard TCP port, so I can securely reach it via ssh using the backup dynamic DNS address. Once I'm logged into this machine I can ssh into my other routers to change the default gateway and DNS server and then restart the network stack on them.
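As an illustration of that last step, here is a minimal sketch of how such a gateway switch could be scripted from the jump host. It assumes the third-party paramiko SSH library, key-based logins, and Linux machines whose routing can be changed with 'ip route'; all host names, addresses and file paths are placeholders:

```python
# Switch the default gateway of internal machines from the DSL router to the
# cellular backup router. Assumes the third-party 'paramiko' package,
# key-based SSH logins and Linux hosts using 'ip route'. All addresses,
# user names and paths below are placeholders.
import paramiko

MACHINES = ["192.168.1.10", "192.168.1.11"]   # boxes whose gateway to switch
CELLULAR_GW = "192.168.1.254"                 # LAN address of the cellular router
BACKUP_DNS = "9.9.9.9"                        # DNS server reachable via the backup line

def switch_gateway(host: str) -> None:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username="pi", key_filename="/home/pi/.ssh/id_rsa")
    try:
        commands = [
            f"sudo ip route replace default via {CELLULAR_GW}",
            f"echo 'nameserver {BACKUP_DNS}' | sudo tee /etc/resolv.conf",
        ]
        for command in commands:
            _, stdout, stderr = client.exec_command(command)
            stdout.channel.recv_exit_status()   # wait for the command to finish
            error = stderr.read().decode().strip()
            print(host, command, error or "ok")
    finally:
        client.close()

for machine in MACHINES:
    switch_gateway(machine)
```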

The last thing that remains to be done during a DSL outage is to switch the dynamic DNS domain I use for my services away from the DSL router and towards the cellular router. Once that is done, I have my main services back in operation. In addition, I can use the Raspberry Pi's vncserver to remotely get a GUI on a machine inside my home network and use a browser to access the web interface of the routers for maintenance. Again, the ssh connection helps to securely access the VNC server, and I'll describe in a second post how that works.
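For the dynamic DNS switch, many providers offer an HTTP-based update interface in the style of the classic dyndns2 protocol, so this step can be scripted as well. This is only a sketch: the update URL, host name, credentials and IP address are placeholders and the exact parameters depend on your provider:

```python
# Repoint the main dynamic DNS name at the cellular router's current public
# IP using a dyndns2-style HTTP update interface. All values are placeholders;
# check your dynamic DNS provider's documentation for the real URL and parameters.
import urllib.request

UPDATE_URL = "https://members.example-dyndns.net/nic/update"   # placeholder
HOSTNAME = "mycloud.example-dyndns.net"                        # placeholder
USER, PASSWORD = "user", "secret"                              # placeholders
NEW_IP = "198.51.100.23"   # public IP currently assigned to the cellular router

# dyndns2-style services use HTTP basic authentication
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, UPDATE_URL, USER, PASSWORD)
opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(password_mgr))

with opener.open(f"{UPDATE_URL}?hostname={HOSTNAME}&myip={NEW_IP}", timeout=10) as response:
    print(response.read().decode())   # e.g. 'good 198.51.100.23' on success
```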

So while this works very well there are a number of quirks:

The first is that most cellular network operators no longer assign public IP addresses, which is, however, a requirement for this to work. Fortunately, my cellular operator has a dedicated APN for this, but it seems to be a rarity these days.
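As a quick way to tell whether the backup line actually got a public address, one can check the WAN address against the private and carrier-grade NAT ranges. A minimal sketch, where the address is a placeholder for whatever your cellular router reports on its WAN side:

```python
# Check whether a WAN address is publicly reachable, i.e. not an RFC 1918
# private address and not in the carrier-grade NAT range (RFC 6598).
# The address below is a placeholder for the one your router reports.
import ipaddress

wan_address = ipaddress.ip_address("100.66.12.34")        # placeholder WAN IP
cgnat_range = ipaddress.ip_network("100.64.0.0/10")       # carrier-grade NAT space

if wan_address.is_private or wan_address in cgnat_range:
    print(f"{wan_address} is not publicly reachable - incoming connections will fail")
else:
    print(f"{wan_address} looks like a public address - remote access can work")
```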

The second thing that makes the use of the backup solution somewhat of a pain in practice is that the cellular router doesn't recognize that, when I'm at home and use my domain name to access my cloud services, it should loop the packets back internally (NAT loopback) instead of sending them out to the network where they are lost. That means that while I'm in the home network I can't reach my services over the default domain name. My solution for this is to connect to an external VPN service so the loopback is performed externally. Not ideal, but the amount of data that goes back and forth is not very large.
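Whether a router performs NAT loopback can be tested quickly from inside the home network, as in this little sketch; the domain name and port are placeholders:

```python
# Test whether the router supports NAT loopback (hairpinning): from inside
# the home network, resolve the public domain name and try to open the
# service port. If this times out while the same service works from outside,
# the router does not loop the packets back. Domain and port are placeholders.
import socket

DOMAIN = "mycloud.example-dyndns.net"   # placeholder public domain name
PORT = 443                              # placeholder service port

public_ip = socket.gethostbyname(DOMAIN)
print(f"{DOMAIN} resolves to {public_ip}")

try:
    with socket.create_connection((public_ip, PORT), timeout=5):
        print("Reachable from inside - the router performs NAT loopback")
except OSError as error:
    print(f"Not reachable from inside ({error}) - no NAT loopback, "
          "an external loop such as a VPN is needed")
```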

Another thing is that my own VPN service doesn't work while I'm using the backup solution, because the cellular router doesn't have an option to create static routing entries for the IP address range and subnet my VPN server uses for its clients. While I could live without the VPN server for a while, as I can also use an external VPN service, it limits my ability to give remote support when I am not at home: I use my home VPN service as part of that solution whenever I'm behind a NAT myself and thus not reachable for reverse VNC connections.

So while by and large the backup solution works, there are some shortcomings that would take some more tinkering to overcome. But o.k., it's a backup solution, so I can live with that for a while. And yes, agreed, this is not something non-techies would set up at their home, so it's by no means a solution for the masses.

Raising The Shields – Part 12: Why Do eMail Clients Not Have An Option To Show Certificate Changes?

There we go, as recently reported, my eMail hosters now use Perfect Forward Secrecy (PFS) key negotiation to thwart mass surveillance. There is one more thing I'd like to have though, not from them, but from Mozilla and others working on eMail client programs such as Thunderbird: Warnings when SSL certificates change.

While it's great to have PFS in place, there is still the loophole that anyone able to create a certificate for my eMail hoster's domain on the fly can spy on my email traffic. The only thing that could warn the user of this is the email client presenting a warning when the hoster's certificate changes. I know that's probably nothing for the masses, but a little switch in the configuration for those who'd like to have it would be very nice.

On the web browser side I use the 'Certificate Patrol' plugin for this purpose and it's quite interesting to see when and how often certificates change. I'd really like to have something similar for Thunderbird as well!
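In the meantime, a little pinning script can at least detect a change out-of-band. This is only a rough sketch: the host name is a placeholder for your own mail server and the fingerprint is stored in a plain file in the home directory:

```python
# Warn when a mail server's TLS certificate changes, similar in spirit to
# what 'Certificate Patrol' does for the browser: store a SHA-256 fingerprint
# of the server certificate and compare it on the next run.
# Host name, port and pin file location are placeholders.
import hashlib
import os
import ssl

HOST, PORT = "imap.example-mailhoster.de", 993     # placeholder IMAP-over-TLS server
PIN_FILE = os.path.expanduser("~/.mailcert_pin")   # where the fingerprint is kept

pem_cert = ssl.get_server_certificate((HOST, PORT))
fingerprint = hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem_cert)).hexdigest()

if os.path.exists(PIN_FILE):
    with open(PIN_FILE) as pin:
        stored = pin.read().strip()
    if stored != fingerprint:
        print("WARNING: the server certificate has changed!")
        print("  stored :", stored)
        print("  current:", fingerprint)
    else:
        print("Certificate unchanged:", fingerprint)
else:
    with open(PIN_FILE, "w") as pin:
        pin.write(fingerprint)
    print("Pinned certificate fingerprint:", fingerprint)
```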

P.S.: And in case you are wondering about previous 'Raising the Shields' posts, click on the privacy tag below or use this Google search.

New Buildings, Insulation and Coverage Issues

A little rant today on new buildings and coverage issues, as I keep hearing such reports with increasing frequency:

When GSM was launched in the 1990s, the windows of most buildings were made of ordinary glass, and while there was some signal loss through them, by and large things worked pretty well. Over the last couple of years, however, new buildings, especially offices, have been equipped with heat-insulating windows that don't only keep the heat or cold out but also radio waves. Their effect is pretty dramatic: excellent coverage outside the building, no coverage whatsoever inside. Hotels, offices, shopping malls, you name it, it's getting more difficult to get coverage into those buildings from macro cells on the outside. While shopping malls are often equipped with indoor coverage via repeaters or small cells, hotels and office buildings usually are not. An exception I have noticed are 4+ star hotels in Asia, while their European counterparts usually don't bother. Sure, there are solutions for this that work great, such as repeaters, distributed antenna systems, small cells, femto cells, etc., but they all require active interaction between network operators and the owners of the buildings, i.e. extra work. Extra work many building owners are so far unwilling to do. I wonder how much critical mass it will take in terms of new buildings before network operators take a pro-active approach to this!?

Raising the Shields – Part 11: My Email Hoster uses Perfect Forward Secrecy Now

One of the few positive outcomes of the ongoing spying scandal is that German email hosters have announced that they will improve security for email exchanged between them by introducing encryption. In addition, many of them have now upgraded the security of their SMTP, POP and IMAP connectivity towards their customers as well. When I recently ran a trace of the email traffic between me and my provider, I was positively surprised to see that they now use TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (0xc014) as a cipher suite with clients that support it (e.g. Thunderbird in my case on my notebook and K-9 Mail on Android). ECDHE stands for Elliptic Curve Diffie-Hellman Ephemeral, an algorithm used to generate temporary cipher keys which can't be reconstructed even if the SSL certificate used during session establishment falls into the wrong hands later on. Hence it's called 'Perfect Forward Secrecy'. For details of what this means, have a look at this previous post. While my data is still stored on the server as clear text, this at least prevents casual eavesdropping by those analyzing all data that runs through a transmission link. And that suits me just fine!
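Whether one's own provider negotiates a forward-secret cipher suite can also be checked with a few lines of Python instead of a full packet trace. A small sketch, with the host name as a placeholder for your own mail server:

```python
# Show which cipher suite a mail server negotiates for IMAP over TLS.
# Cipher names starting with ECDHE or DHE use ephemeral key exchange and
# thus provide forward secrecy (all TLS 1.3 suites do so by design).
# The host name below is a placeholder for your own provider.
import socket
import ssl

HOST, PORT = "imap.example-mailhoster.de", 993   # placeholder IMAP server

context = ssl.create_default_context()
with socket.create_connection((HOST, PORT), timeout=10) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        name, version, bits = tls_sock.cipher()
        print(f"Negotiated {name} ({version}, {bits} bit)")
        if name.startswith(("ECDHE", "DHE")) or version == "TLSv1.3":
            print("Ephemeral key exchange - Perfect Forward Secrecy is in use")
```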

How A Window Ends Up On The Screen

Back in my college days I had a course on computer graphics and how elements such as windows, buttons, input boxes, etc. end up on the screen (both on the desktop and, from today's perspective, on mobile) and how they can overlap and disappear behind each other. But that's been some time ago, so I was quite glad to have stumbled over a quick refresher on the difference between X and Wayland here. The first part of the post is quite easily understandable for those with a general background in how a desktop is rendered, while the second part is quite a deep dive. But even if you don't want to go down that deep, the post is still worth reading.

P.S.: And no, I don't want to get into the debate of Wayland vs. Mir.