The 30C3 Has Started – Schedule, Streamdumps and Live Network Stats

Yesterday, the 30th Chaos Communication Congress (30C3) started in Hamburg, Germany, and here are some interesting links for peeking into the event:

A couple of weeks ago I had a post on the 100 Gbit/s link the congress had to the outside world for participants to stay connected during the event. Here's a link to a page that shows some live stats on bandwidth utilization, the amount of data transferred, the number of Wi-Fi clients, on-site GSM use and more. When I looked on the evening of the first day, bandwidth utilization was around 12% in the uplink and 6% in the downlink. In other words, more data was sent out of the congress than into it. The percentages might seem low, but 12% of a 100 Gbit/s link means a data rate of 12 gigabits per second… Another staggering number is the 4000 Wi-Fi clients using the system.

For those who can't be there in person, here's the schedule and a link to a fast proxy of the raw video streamdumps of the sessions for quick downloads right after the talks are over. Live video and audio streams are also available, as are cut and edited versions of each session. Google will help you find those links.

Wi-Fi Monitoring Approach From 2009 Still Works

Four years ago I put down my notes in a blog post on how to use Ubuntu on a notebook to trace Wi-Fi with Wireshark so I could see the Wi-Fi management frames and also the Wi-Fi portion of each data frame. Amazingly enough, it still works the same way as it did four years ago, so I thought I'd write a quick post and link to the original entry, as information about how this is done is not widespread. So here's a link to the original post.
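
For reference, the basic steps on an Ubuntu notebook still look roughly like the sketch below. The interface name wlan0 and channel 11 are assumptions, the driver has to support monitor mode, and NetworkManager should be kept from re-configuring the adapter while tracing:

sudo service network-manager stop      # keep NetworkManager from touching the adapter
sudo ip link set wlan0 down
sudo iw dev wlan0 set type monitor     # switch the adapter to monitor mode
sudo ip link set wlan0 up
sudo iw dev wlan0 set channel 11       # tune to the channel of the network to be traced
sudo wireshark &                       # then start a capture on wlan0

With the adapter in monitor mode, Wireshark shows the 802.11 management frames and the radiotap and Wi-Fi headers of each frame in addition to the payload.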

Tracing this way is a bit more complicated than the approach with the mini Wi-Fi access point that backhauls its traffic via Ethernet, as described in this post from back in November. However, that approach is only good for tracing Ethernet frames and gives no insight into the wireless part. But sometimes that is not necessary anyway; it really depends on what one is looking for.

Some Reflections On Why I Like The Fairphone

Back in November I had a post in which I wrote that my next device would be a Fairphone (which has now started shipping). Not because it has revolutionary new features or because it's especially high end, which it is not, but for a number of other reasons.

First of all, I like the idea that people think about how a smartphone can be produced in a fair(er) way for people and the environment. Also, I like the fact that it is done by a small company and that they are very open about the way the phone is designed and produced. I can identify with their ideas and their motives, and that's another important thing that has been missing for me ever since Nokia threw itself (or was thrown?) into Microsoft's grip of death.

Before continuing on the Fairphone, a quick look back at former times: Back in the 2006+ timeframe I could identify with Nokia devices because, for me, it was pretty much 'the' company innovating the most around bringing the Internet to mobile devices at the time. Social media was also a new concept back then and to me their approach appeared to be honest. Sure, it was driven by a marketing department, but the whole thing was so novel that it was still possible to get engaged with the people there. This interaction got lost on both sides over time as the original people left and as things just became too mainstream.

These days, the Internet on mobile devices has gone mainstream, so that issue is solved. Sure, there is still innovation, but by and large the Internet is mobile now. I'm not mainstream, and neither was Nokia when it pushed the idea of the Internet on mobile, so it's difficult for me to identify with the large and anonymous corporations churning out devices in the tens of millions today.

With the Fairphone, to come back on topic, it's different. The company has faces, and even though I know all but one of them only from their website, it's a much more personal approach. Also, I'm happy that I could contribute a bit to the project, by paying up front in November for one thing and by taking part in the testing and bug-fixing effort for another.

And last but not least, on the technology side the Fairphone has a combination of features I don't get in any other device: dual-SIM capability together with a good screen resolution, a fast processor and an almost stock Android with root rights so I can tame what Google is doing. I haven't tried that yet but I hope it still works.

Thanks for that, Fairphone, I'm sure it will become even more exciting as the story continues.

Network Testing From A Train Perspective – Something for 2014?

Recently, a German consumer telecoms magazine published its annual network test results for the network operators in Austria, Switzerland and Germany. If you are interested, P3 has a PDF of the article here. Sorry, it's in German only, but even if you don't speak the language the result tables should still be easy to interpret.

It's good to see that 80% of their drive route in Germany was covered by the LTE networks of two carriers and that top speeds beyond 90 Mbit/s were measured. It's also interesting that network operators now have the tools to minimize call setup times when a fallback from LTE is necessary. While two network operators still add several seconds of delay, one operator has managed to cut that down to a mere 0.2 seconds, as I had already noticed myself back in July.

So far, so good. However, when I was recently on a 10 hour train trip through Germany and Austria I was painfully reminded that network coverage along many rail lines is still very far from perfect. This was made even worse by the train not being a high-speed train with 2G/3G repeaters on board, while unfortunately still having insulating windows that keep out not only the heat and cold but also the wireless networks.

So perhaps it is time to include train trips through the three countries as another testing criterion in such network tests in 2014, to give consumers an idea of how well different railway lines are covered and what to expect on trains with and without repeaters and insulating windows. Perhaps this would encourage network operators that want to provide quality coverage to do something about the current state of affairs. And trains are usually full of bored people wanting to use their mobile devices, so it's a strong sales argument!

Bluetooth Revival Part 3 – Rental Car Experience

In my series on my renewed enthusiasm for Bluetooth (see here and here) I can, surprisingly, add another entry: Even though it is every now and then amusing to listen to advertisements on the radio, I do get bored and annoyed after a while. When recently renting a car for a day and driving overland I got to this point quite quickly. But then I noticed that the car was equipped with a Bluetooth interface for music streaming and telephony. The pairing procedure was not for the faint-hearted, and one should deny the car's request to access the phone's address book, but once done I could stream my music from the phone to the on-board audio system – without advertising interruptions. Excellent! Voice telephony was also integrated into the system, as incoming calls were announced over the car's loudspeakers.

No LTE with a GSM SIM card

This quick post was inspired by a comment on the previous blog entry about 3G security. As the comment mentioned, 3G security procedures are only used if the SIM card, which should actually be called a UICC (Universal Integrated Circuit Card) these days, contains a USIM (Universal Subscriber Identity Module), i.e. a folder branch and internal logic for 3G security. For details see here.

As also mentioned in the comment, many network operators allow the use of old 2G SIMs (i.e. UICCs with only a GSM SIM folder) in their 3G networks. From the outside, a UICC with a 2G SIM and a UICC with a 3G USIM can't be told apart unless the operator has printed something on the card that hints that it's a 3G SIM card. In practice, it's even worse, as many network operators still sell 2G UICCs today, probably because they are a couple of cents cheaper.
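
For the curious: with a PC/SC smart card reader and a tool that can send raw APDUs, such as scriptor from the pcsc-tools package, one can check what a card actually contains. The following is just a sketch with the APDUs written down from memory: select EF_DIR, the application directory of the UICC, and read its first record.

00 A4 00 04 02 2F 00     (SELECT EF_DIR)
00 B2 01 04 00           (READ RECORD 1)

If the record that comes back contains an Application Identifier starting with A0 00 00 00 87 10 02, the card contains a USIM application; if EF_DIR is absent or holds no such entry, it's a plain 2G SIM.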

But handing out 2G UICCs now backfires with LTE. Here, the 3GPP specifications explicitly state that 2G UICCs can't be used. And indeed, when a user has a 2G SIM card (which he might just have bought recently) he won't be able to use LTE, because either the mobile won't even try or the network rejects the user. I've given it a try and it really doesn't work.

In other words, those network operators that went for the cheap option will have to exchange a lot of UICCs in the future when they go live with LTE, as their customers with LTE-capable devices will otherwise be stuck in 3G.

A Bit About AUTN and 3G Security

One major new feature introduced with UMTS that GSM did not have is mutual authentication, instead of only the device authenticating itself towards the network. This way, man-in-the-middle attacks can be prevented in which an attacker puts a rogue base station in place and tricks a device into using it instead of the real network. So far I had always assumed that the Authentication Token (AUTN) that was introduced contained all the magic. But 3G security and ciphering is a bit complex, so I never dug down deep enough to actually understand how it really works. Lately, I came across the topic again and this time around I investigated a bit more. So here's how man-in-the-middle attacks are prevented in UMTS:

The story starts with the Authentication Token (AUTN). This is a new parameter in UMTS that did not exist in GSM and it is computed in the Home Location Register / Authentication Center (HLR / AuC) and on the SIM card. Input parameters are a random number, which is sent during authentication from the network to the mobile device, and the secret key that is only stored in the SIM card and in the Authentication Center and never sent anywhere. Another input parameter I was so far not aware of is a sequence number (SQN) that increases over time. When authentications are performed, the mobile device only accepts an AUTN that was generated with a higher sequence number than what it has seen before. In practice, things are complicated a bit by the circuit switched and packet switched core network parts each having their own set of precomputed authentication vectors, and each side authenticates a mobile device on its own. In other words, sequence numbers increase independently on the circuit switched and packet switched side and a mechanism is in place in the mobile device to handle this. How sequence numbers are generated and increased is implementation specific, but suffice it to say that the number can only increase and not decrease over time.
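
To give this a bit of structure, here is roughly how the parameters fit together as defined in 3GPP TS 33.102, with f1 to f5 being the operator's authentication functions, K the secret key stored on the SIM card and in the AuC, and AK an anonymity key that conceals the sequence number on the air interface:

MAC  = f1(K, SQN || RAND || AMF)      message authentication code
XRES = f2(K, RAND)                    expected result, compared with the SIM's response
CK   = f3(K, RAND)                    ciphering key
IK   = f4(K, RAND)                    integrity key
AK   = f5(K, RAND)                    anonymity key
AUTN = (SQN xor AK) || AMF || MAC

The SIM card, which knows K, recomputes AK and MAC from the received RAND, recovers the SQN and only accepts the network if the MAC matches and the sequence number is fresh.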

At this point we have the AUTN and the sequence number (SQN) that is encoded in it to prevent replay attacks, i.e. the reuse of potentially intercepted authentication information. The next and equally vital ingredient is integrity checking of the signaling messages that are exchanged between the network and the mobile device. Integrity checking is also based on the secret key and ensures that messages are not altered on the fly by an attacker that has managed to insert itself into the transmission chain. At this point an attacker can still passively eavesdrop on the signaling and user data exchange. Therefore the final ingredient is ciphering of signaling messages and user data to prevent this as well.

To quickly summarize: The following things are needed to prevent man-in-the-middle attacks and eavesdropping:

  • An Authentication Token (AUTN) so the mobile knows that the network performing the authentication is trusted by the Authentication Center
  • A Sequence Number (SQN) embedded in the Authentication Token to prevent replay attacks
  • Integrity checking so an attacker can't act as a man in the middle
  • Ciphering to prevent passive eavesdropping

For many more details, see this paper from the adventurous days back in 2001.

The Prepaid Wireless Internet Wiki Surfaces Again At WikiFoundry

Back in April 2013, the Prepaid Wireless Internet Wiki I started many years ago suddenly vanished from the cloud. At the time it was hosted by Wetpaint and I found no way to contact them to find out what had happened. Bitten by the cloud, yet again… When I recently searched for something on the Internet I suddenly rediscovered the Wiki, this time hosted on WikiFoundry!

It seems the Wetpaint wikis were at some point bought by WikiFoundry and they put the Prepaid Wireless Internet wiki back online. Gee, well thanks for that! It looks like it hasn't been discovered by many, as there haven't been many modifications since then. But my login data was still valid there so I can still (or again?) administer the site. The new 'owner' was also nice enough to provide an export option. Thanks, that's great, just in case this arrangement doesn't last, either.

So there we go, I've put a link to the Wiki back on the blog and hope it will be used as it was in the 'old days'.

What A Base Station Antenna Looks Like On The Inside

Cellular antennas can be found on top of buildings everywhere these days: those vertically long white antennas, usually three at a time pointing in different directions. But little is known about what they look like on the inside. And there must be quite something in them these days, as most of them support several independent frequency ranges and two polarizations per antenna (horizontal and vertical) for MIMO and RX/TX diversity. I've had a number of posts on this blog about antennas over the years and my two favorites are 'Antenna in Ruins' and 'Antenna Stuff'. But so far I had never seen the inside of one. Recently, however, I stumbled over a picture taken in the German Technical Museum and available on Wikipedia here that shows what it looks like inside.

How To Get An SSL Certificate For Your OwnCloud At Home That Runs On A Dynamic IP Address

I've been running an Owncloud instance at home for a while now and it's been revolutionary for me. It allows me to securely share my calendar and address book between several of my devices over the Internet, and it lets me share files with friends and associates as easily as over less secure commercial cloud services. The only shortcoming I grew a bit tired of was that I only had a self-signed SSL certificate for my web server. This meant that I either had to send http download links to those I wanted to exchange a file with, or to tell them to ignore the stern warning message about a non-authenticated certificate when sending them an https link. Neither option is really acceptable in the long run, at least not to me.

The solution, of course, is to get the SSL certificate for my Owncloud web server authenticated by a Certificate Authority. This is a bit tricky, though, as I run my Owncloud at home and my DSL line has a dynamic IP address that changes once a day. Therefore I use a dynamic DNS provider, and whenever my IP address at home changes, my DSL router contacts the dynamic DNS provider and updates the IP address for my domain name. The catch with this approach is that in order to get an SSL certificate one has to be the owner of the domain name. When using a free dynamic DNS service, the service provider owns the domain name and hands out sub-domains to users. In other words, with this setup it's not possible to get an SSL certificate authenticated by a Certificate Authority for a sub-domain of the dynamic DNS provider.
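
As an aside, the update mechanism itself is simple: the router does little more than call the provider's update URL whenever the IP address changes. With No-IP this can also be done manually or from a cron job, roughly as follows; the hostname, username and password are placeholders and the exact URL should be taken from No-IP's own documentation, I'm quoting it from memory here:

curl -u 'account@example.com:password' "https://dynupdate.no-ip.com/nic/update?hostname=myhost.example.net&myip=1.2.3.4"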

Some dynamic DNS providers offer to register domain names in the name of the customer that can then be used with their dynamic DNS service, but this is obviously not free. I didn't shop around for a cheap solution as I am very happy with the reliability of No-IP, whom I've used for a long time now with a free account. It works well, so I decided to stay. No-IP offers two variants of using one's own domain name with their dynamic service, and this is actually a bit confusing: Their "Plus-DNS" package lets you use a domain name that is already registered to your name. This requires that the company that registered the domain name for you allows you to change the DNS entries to point to No-IP. I have a couple of domains I could use for this purpose but unfortunately my provider does not let me change the DNS entries.

Therefore what I really needed was to get a domain name via No-IP and then link that with their "Plus-DNS" package. Note: Whether No-IP is a suitable dynamic DNS provider for you depends on whether your DSL or cable router at home lets you configure it for their dynamic DNS service, so have a look there first. Unfortunately, No-IP doesn't do a very good job of pointing out that the two packages need to be combined, so I got it wrong the first time. So here's how it works if it is done in the correct order: Getting a domain name via them costs $15 a year when you start from this link. But that's only half the deal, as later on you also have to select the "Plus-DNS" package to add the dynamic DNS functionality to the domain name. Altogether the package costs $32, or around €25, per year. The domain name is registered in an instant and usable straight away. Care should be taken that the email address registered for the domain name is real, as an email is sent to this address later on during the SSL certificate authentication process.

Once the domain works and points to the IP address dynamically assigned to the home network, everything is in place to create the SSL certificate and get it authenticated. No-IP also offers to do that part but I found the price a bit too high. So I looked around a bit and found Namecheap, which resells Comodo SSL certificates for $9 with a validity period of one year. I later tried their certificate with Firefox and Internet Explorer on the desktop as well as Safari and Opera on mobile, and it's accepted by all of them. Creating a certificate and then getting it authenticated is quite straightforward once one knows how to do it, and I've described the details in this blog post.
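
In a nutshell, the part that happens on the server boils down to generating a private key and a certificate signing request with openssl, roughly like this; the file names are just examples, chosen to match the Apache configuration shown further below, and the domain name goes into the Common Name field when openssl asks for it:

openssl genrsa -out martin-server.key 2048                        # private key, stays on the server
openssl req -new -key martin-server.key -out martin-server.csr    # certificate signing request

The content of the .csr file is then pasted into the Certificate Authority's order form, the validation email mentioned above is answered, and the signed certificate comes back by email.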

Once the Certificate Authority delivers the signed SSL certificate by email, the final step is to configure the web server to use it. In my case I use Apache2 for my Owncloud instance, and as I have no virtual hosts configured, the only configuration file that needs to be changed is /etc/apache2/sites-enabled/default-ssl. Here are the lines that need to be adapted:

#   SSL Engine Switch:
#   Enable/Disable SSL for this virtual host.
SSLEngine on

#   A self-signed (snakeoil) certificate can be created by installing
#   the ssl-cert package. See
#   /usr/share/doc/apache2.2-common/README.Debian.gz for more info.
#   If both key and certificate are stored in the same file, only the
#   SSLCertificateFile directive is needed.

SSLCertificateFile    /etc/ssl/certs/martin.crt
SSLCertificateKeyFile /etc/ssl/private/martin-server.key

#   Server Certificate Chain:
#   Point SSLCertificateChainFile at a file containing the
#   concatenation of PEM encoded CA certificates which form the
#   certificate chain for the server certificate. Alternatively
#   the referenced file can be the same as SSLCertificateFile
#   when the CA certificates are directly appended to the server
#   certificate for convinience.

SSLCertificateChainFile /etc/ssl/certs/martin.ca-bundle

If you've read my post about SSL certificates linked above, the lines that point to the .crt and the .key file are easy to understand. The third parameter, SSLCertificateChainFile, points to a file containing the intermediate certificates of the certificate issuer, which the server sends along with its own certificate so that clients can build the full trust chain. Like the signed certificate itself, this file is provided by the Certificate Authority, usually as a ca-bundle. I pointed the parameter to one of the ca-bundle files I received; if the Certificate Authority delivers several files, they should be concatenated into a single bundle first.
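
Once the key, certificate and bundle files are in place, here is a short sketch of the remaining steps on a Debian/Ubuntu system, assuming the SSL site is not yet enabled:

sudo a2enmod ssl                 # make sure mod_ssl is loaded
sudo a2ensite default-ssl        # enable the SSL virtual host if it isn't already
sudo apache2ctl configtest       # check the configuration for syntax errors
sudo service apache2 restart     # restart Apache to load the new certificate

After the restart, opening the Owncloud URL over https should show the connection as trusted without any certificate warning.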

There we go, that's it, for less than €35 a year I have my own domain now for my Owncloud instance at home together with a valid SSL certificate!