How To Get An SSL Certificate For Your OwnCloud At Home That Runs On A Dynamic IP Address

I've been running an Owncloud instance at home for a while now and it's been revolutionary for me. It allows me to securely share my calendar and address book between several of my devices over the Internet, and it lets me share files with friends and associates as easily as over less secure commercial cloud services. The one shortcoming I grew a bit tired of was that I only had a self-signed SSL certificate for my web server. This meant that I either had to send http download links to those I wanted to exchange a file with, or, when sending an https link, had to tell them to ignore the stern warning message about a non-authenticated certificate. Neither option is really acceptable in the long run, at least not to me.

The solution, of course, is to get the SSL certificate of my Owncloud web server authenticated by a Certificate Authority. This is a bit tricky, though, as I run my Owncloud at home and my DSL line has a dynamic IP address that changes once a day. Therefore I use a dynamic DNS provider, and whenever my IP address at home changes, my DSL router contacts the dynamic DNS provider and updates the IP address for my domain name. The catch with this approach is that in order to get an SSL certificate one has to be the owner of the domain name. When using a free dynamic DNS service, the service provider owns the domain name and distributes sub-domains to users. In other words, with this setup it's not possible to get an SSL certificate authenticated by a Certificate Authority for a sub-domain of the dynamic DNS provider.

Some dynamic DNS providers offer to register domain names in the name of the customer that can then be used with their dynamic DNS services, but this is obviously not free. I didn't shop around for a cheap solution as I am very happy with the reliability of No-IP, which I've used with a free account for a long time now. It works well, so I decided to stay. No-IP offers two variants of using one's own domain name with their dynamic service, and this is actually a bit confusing: Their "Plus-DNS" package lets you use a domain name that is already registered to your name. This requires that the company that registered the domain name for you allows you to change the DNS entry to point to No-IP. I have a couple of domains I could use for this purpose, but unfortunately my provider does not let me change the DNS entries.

Therefore what I really needed was to get a domain name via No-IP and then link it with their "Plus-DNS" package. Note: Whether No-IP is a suitable dynamic DNS provider for you depends on whether your DSL or cable router at home lets you configure it for their dynamic DNS service, so have a look there first. Unfortunately, No-IP doesn't do a very good job of pointing out that the two packages need to be combined, so I got it wrong the first time. Here's how it works when done in the correct order: Getting a domain name via them costs $15 a year when you start from this link. But that's only half the deal, as later on you also have to select the "Plus-DNS" package to add the dynamic DNS functionality to the domain name. Altogether the package is $32, or around €25, per year. The domain name is registered in an instant and usable straight away. Take care that the email address registered for the domain name is real, as an email is sent to this address later during the SSL certificate authentication process.

Once the domain works and points to the IP address dynamically assigned to the home network, everything is in place to create the SSL certificate and get it authenticated. No-IP also offers to do that part, but I found their price a bit too high. So I looked around a bit and found Namecheap, which resells Comodo SSL certificates for $9 with a validity period of one year. I later tried their certificate with Firefox and Internet Explorer on the desktop as well as Safari and Opera on mobile, and it's accepted by all of them. Creating a certificate and then getting it authenticated is quite straightforward once one knows how to do it, and I've described the details in this blog post.

Once the Certificate Authority delivers the signed SSL certificate by email, the final step is to configure the web server to use it. In my case I use Apache2 for my Owncloud instance, and as I have no virtual hosts configured, the only configuration file that needs to be changed is /etc/apache2/sites-enabled/default-ssl. Here are the lines that need to be adapted:

#   SSL Engine Switch:
#   Enable/Disable SSL for this virtual host.
SSLEngine on

#   A self-signed (snakeoil) certificate can be created by installing
#   the ssl-cert package. See
#   /usr/share/doc/apache2.2-common/README.Debian.gz for more info.
#   If both key and certificate are stored in the same file, only the
#   SSLCertificateFile directive is needed.

SSLCertificateFile    /etc/ssl/certs/martin.crt
SSLCertificateKeyFile /etc/ssl/private/martin-server.key

#   Server Certificate Chain:
#   Point SSLCertificateChainFile at a file containing the
#   concatenation of PEM encoded CA certificates which form the
#   certificate chain for the server certificate. Alternatively
#   the referenced file can be the same as SSLCertificateFile
#   when the CA certificates are directly appended to the server
#   certificate for convenience.

SSLCertificateChainFile /etc/ssl/certs/martin.ca-bundle

If you've read my post about SSL certificates linked above, the lines that use the .crt and the .key file are easy to understand. The third parameter, SSLCertificateChainFile, points to a file that contains the certificate chain of the certificate issuer, i.e. the intermediate CA certificates between my certificate and a root certificate the browsers already trust. Like the signed certificate, these are provided by the Certificate Authority. I configured the parameter to point to one of the ca-bundle files I received. That was probably not quite correct, as the ca-bundle files should have been concatenated into a single file first, but since no browser complained, it doesn't seem to have hurt in my setup.
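If, like in my case, the Certificate Authority delivers several ca-bundle or intermediate certificate files, they can be concatenated into a single file and the result can be checked with openssl. Here's a minimal sketch of how that might look; the intermediate file names are just placeholders for whatever your Certificate Authority actually sends:

# concatenate the intermediate CA certificates into a single bundle
# (order: the CA that signed my certificate first, the root last)
cat intermediate-ca.crt root-ca.crt > /etc/ssl/certs/martin.ca-bundle

# check that the server certificate chains up to the bundle without errors
openssl verify -CAfile /etc/ssl/certs/martin.ca-bundle /etc/ssl/certs/martin.crt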

There we go, that's it: for less than €35 a year I now have my own domain for my Owncloud instance at home, together with a valid SSL certificate!

Still No UMTS and LTE in the Paris Metro

One and a half years ago I wrote a blog post about the growing pains of taking the Paris metro and accessing the Internet over a 2G network that just couldn't absorb the load anymore. At the time I noted that there were talks between the metro operator and one of the French network operators to deploy 3G and LTE in the metro. Sadly, one and a half years later it still hasn't happened, and the 2G network now fails completely for Internet access. A sad state of affairs. How long do I have to wait before coming back and being positively surprised?

But to end this post on a positive note I'd like to add that outside the metro, using 3G has become a lot simpler from an international roaming point of view, because the European data roaming rates of my home network operator have reached a level where day-to-day web browsing on the mobile and some data use from the notebook are affordable enough that I don't have to ration things quite as strictly anymore. Good!

100 Gigabit/s Ethernet Backhaul At The Upcoming CCC Conference

… yes, you read that right, the upcoming Chaos Communication Congress will have a 100 Gbit/s Ethernet backhaul. When I first read it in the press I had a hard time believing it, but here's the original blog post on the CCC's web site (and they know what they are talking about…)

Last year's congress was attended by 6000 participants. If you divide one value by the other, that's about 16 Mbit/s per participant even if everybody suddenly decided to download something at the same time. As this is unlikely to happen at any moment during the conference, you can imagine what kind of connectivity experience one will have there. Unfortunately I've never been able to fit the congress into my schedule. Next year, perhaps.

Let's be a bit crazy and compare the 100 Gbit/s link to, say, the aggregate throughput of Vodafone Germany on New Year's Eve 2011, which I calculated to be 7.9 Gbit/s. And the fixed line interconnect traffic of the German incumbent peaked at 1,800 Gbit/s the same day, as reported here.

100 Gbit/s for 6000 congress participants. Sounds like a very very fat pipe indeed!

TCP, Fragmentation and How The MTU Controls The MSS…

From time to time I encounter networks that my VPN struggles with. Sometimes the VPN tunnel is established just fine and pings go through the tunnel, but web browsing and other download activities just don't work. The effect is caused by fragmentation, i.e. the IP packets in the downlink direction are too large for some part of the network between me and the server, and hence they are either split somewhere along the way or simply discarded for being too long.

The remedy for such behavior is to reduce the Maximum Transmission Unit (MTU) of the tunnel interface on my computer to a lower value, such as 1200 bytes, and things come back to life. What I always wondered, though, but never had the time to figure out, was how the server is notified of the reduced MTU.

When I recently encountered the scenario again I had a closer look at the TCP connection establishment and found the answer: the maximum size a TCP segment may have is announced in the Maximum Segment Size (MSS) option of the TCP header of the very first (SYN) packet of a connection. The first image on the left shows the default MSS over my Ethernet at home, 1460 bytes. Together with the 40 bytes of TCP and IP overhead, that results in an IP packet size of 1500 bytes, which is exactly the MTU size configured on my Ethernet interface.

When I change the MTU size on the fly on my Linux machine with 'sudo ifconfig eth1 mtu 800', the MTU size shown by 'ifconfig' becomes 800. The MSS then becomes 760 bytes and the Ethernet frame is 814 bytes long. The 14 extra bytes are for the Ethernet header, which does not count toward the MTU because it is discarded at the next router and replaced by another Ethernet header, or by some other protocol if the next hop is over a different network technology.
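For anyone who wants to observe this themselves, here's a quick sketch of the commands; the interface name and MTU value are just examples from my setup, and 'ip link' is the more modern equivalent of the ifconfig command above. tcpdump prints the announced MSS as part of the TCP options of every SYN packet it captures:

# reduce the MTU of the interface (same effect as the ifconfig command above)
sudo ip link set dev eth1 mtu 800

# show the MSS option announced in outgoing and incoming TCP SYN packets
sudo tcpdump -i eth1 -n 'tcp[tcpflags] & tcp-syn != 0'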

There we go, another mystery solved.

Mouse – Keyboard – Wifi – A Layer 1 Trace

Over the years I've used Metageek's Wi-Spy USB tracer a lot to figure out what is going on in the 2.4 GHz Wi-Fi band. When I was recently investigating a slow Wi-Fi, which I ultimately traced to a runaway Wi-Fi card, I also picked up the signals of my wireless mouse and keyboard alongside my own Wi-Fi signal. The image on the left shows the three signals. The mouse transmits near channel 2, the keyboard near channel 7, and the Wi-Fi center frequency is on channel 11. The green dots in the lower part of the image even show when I used the mouse and when I used the keyboard. The Wi-Fi was pretty dormant during the trace, so its part of the image was created only by the beacon frames of the Wi-Fi access point.

Is It Ethical For A Nation To Infect 50,000 Computers With Digital Sleeper Agents?

Over the past days we've heard in the media that the NSA has infected at least 50,000 computers worldwide with digital sleeper agent software, as Techcrunch puts it. Obviously this has created a lot of outrage across the industry and also in the non-technical media. But despite all the outrage, nobody has really commented that actively infecting computers is an order of magnitude worse from an ethical point of view than anything else we have heard about the NSA's doings in recent months.

Listening passively on transmission links and harvesting data is one thing (and already bad enough by itself), but infecting 50,000 computers with spyware is quite another. And I wonder who those 50,000 computers belong to. Did the NSA really find that many terrorists out there? Somehow I doubt it. As if it weren't already bad enough that companies and individuals have to fight criminals trying to infect their PCs with malware that does all sorts of things like stealing passwords, extorting money, and so on. No, now we also have to defend ourselves against nation states doing similar things on a similar scale!?

It makes me wonder when this will go from accusation to proof. What it would take is the code or the executable of the malware and a link back to its origin. With that in hand it wouldn't take long to actually find the malware in practice (unless all copies destroy themselves without leaving a trace). And then imagine the malware is found on computers of governments and private companies around the world. This is the point where the abstract becomes personal. And when you look at what happened when the German Chancellor found out her phone calls were listened to, you get an idea of what is likely to happen in this case. Is it really possible to cover up 50,000 infections?

It really depresses me that a nation goes that far… And while we are at it: What makes us think it is only one nation that thinks it's a good idea to do such things?

Does a Certificate Authority See Your Private Key?

One of the questions I've had for a long time now is whether a Certificate Authority sees the private key of an SSL certificate for a web server during the certification process. If it did, that would be quite a security issue.

Before answering the question, it's probably a good idea to quickly have a look at what an SSL certificate and a Certificate Authority are and what they are needed for:

The first purpose of an SSL certificate is to be sent from a web server to a web browser whenever the https (http secure) protocol is used to establish an encrypted connection. The SSL certificate contains the public key of the web server, which is used to generate a session encryption key. The basic idea behind this approach is that anything encrypted with the public key can only be decrypted again with the private key, which only the web server knows, i.e. it is never transmitted to the web browser. In other words, an attacker who eavesdrops on the communication cannot decrypt the packets he sees passing through, because he lacks the private key.

The second purpose of an SSL certificate is to contain information for the web browser so that it can validate that the connection is really established to the web site the user wants to visit and not to some other, malicious site to which he was redirected by a potential attacker. To achieve this, an SSL certificate has to be signed by a Certificate Authority that vouches for the validity of the certificate. Signing a certificate is done once and requires that the person or company requesting validation from a Certificate Authority can prove that it is the owner of the domain name in the certificate. There are several ways to do this and I'll describe them in a separate blog post. Once the Certificate Authority has established that the person or company requesting the certificate is the rightful owner of the domain name, it generates a public certificate from the information supplied by the requester in a certificate signing request (CSR), which contains, among other things, the domain name (e.g. www.wirelessmoves.com) and the public key to be used.
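Incidentally, the certificate a web server presents can easily be inspected from the command line. A quick example with openssl, using my blog's domain name from above:

# fetch the server certificate and show who it was issued to and by
openssl s_client -connect www.wirelessmoves.com:443 -servername www.wirelessmoves.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates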

The important point is that the Certificate Authority does not generate any keys; it just takes the information supplied by the person or company in the certificate signing request, signs it with its own key and returns the result. And here's the crucial question: Does the information that is sent to the Certificate Authority also contain the private key that is later used on the web server side? If that were the case, the Certificate Authority would hold information that, obtained legally or illegally, would allow intercepted data packets to be decrypted.

To answer this question I recently tried it out myself when I needed to get an SSL certificate for my home cloud. And here's how that works: On my server at home I generate a certificate signing request with the following Unix command:

openssl req -new -newkey rsa:2048 -nodes -keyout m-server.key -out m-server.csr

Before the command generates an output it requests additional information from the user, such as the domain name (e.g. www.wirelessmoves.com), company name, location and so on. Once this information is given, the command generates two files: the .csr file, which is the signing request, and the .key file, which is the private key. The next step is to select a Certificate Authority, which I will again describe in a separate post, and then copy/paste the content of the .csr file into a text box on its web page during the validation process. The private key in the .key file, however, NEVER leaves the server.

And just to make really sure that the private key is not part of the .csr file sent to the Certificate Authority, one can decode the contents of the signing request as follows:

openssl req -in m-server.csr -noout -text

This results in the following output:

Certificate Request:
    Data:
        Version: 0 (0x0)
        Subject: C=DE, ST=Ger, L=Cologne, O=wlx, CN=www.m-test.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:a5:5b:b8:8c:11:2e:cc:48:f9:a6:4c:ed:e6:52:
                    08:58:77:3c:44:a4:78:9f:7c:51:75:79:07:f4:7b:
                    […]
                    0a:4b:1f:bf:b9:90:7d:f8:72:01:50:bc:62:47:8d:
                    be:2e:9e:71:e9:0c:80:56:77:7d:27:05:1b:da:3d:
                    87:d9
                Exponent: 65537 (0x10001)
        Attributes:
            a0:00
    Signature Algorithm: sha1WithRSAEncryption
         73:7c:76:fa:74:b2:34:be:c1:36:9b:aa:06:51:25:e9:f9:df:
         43:0c:9a:a9:75:28:8e:5f:41:f0:30:da:7b:aa:29:90:ea:39:
         […]
         3e:63:d9:1c:e7:65:24:32:c6:05:da:47:10:fd:e9:00:29:ed:
         76:54:54:27:c6:ff:f4:e3:c5:e8:74:1c:dd:29:d0:18:b2:09:
         bd:4c:23:86

There we go, so here's the proof that the Certificate Authority never sees the private key, and hence your communication is safe from eavesdropping, except from those who can steal the Certificate Authority's signing key or set up a Certificate Authority trusted by web browsers (which I am sure quite a number of three letter agencies have already done) and stage a man-in-the-middle attack. There are ways to protect yourself against that as well, e.g. by using the Certificate Patrol Firefox plugin. But that's another story I've already blogged about here.
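One final check that fits here: once the signed certificate is back from the Certificate Authority, one can verify that it really matches the private key that never left the server by comparing the RSA modulus of all three files. The certificate file name below is hypothetical; the three fingerprints must be identical:

openssl rsa -noout -modulus -in m-server.key | openssl md5
openssl req -noout -modulus -in m-server.csr | openssl md5
openssl x509 -noout -modulus -in m-server.crt | openssl md5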

My Smartphone Contacts The Network 10 Times Per Hour When It's Idle

One train of thought I followed with the easy smartphone Wi-Fi tracing setup I wrote about recently is how often a typical smartphone contacts the network per hour even when it is not used and just lies on the table, and what impact that has on the cellular network in a larger context. Even though I monitored the device's behavior over Wi-Fi, the result can be applied to cellular as well, as it is likely that most applications no longer make a difference between Wi-Fi and cellular connectivity. The result is quite interesting:

Even without user interaction my smartphone contacts the network 10 times per hour. Four of these are for checking email. Another four times, Android calls home to Google, mainly using a Google Talk domain, even though I've disabled the app. Less frequently, a DNS query and subsequent traffic can be observed to a number of additional Google domains. I feel quite watched by such unwanted behavior, but there's something that can be done about it with a rooted phone, as I've described here in the past. Further connections are made for various other purposes: my calendar and address book are synchronized with my Owncloud server at home every four hours, the NTP server is queried to keep the clock in sync, crash reports are sent to crashlytics.com (have I consented to this?), the weather app requests updates, the GPS requests ephemeris data periodically, etc.
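If you want to see for yourself which servers a phone contacts while idle, the DNS queries are a good starting point. Here's a sketch of how I could extract them with tshark, Wireshark's command line sibling, on the shared Ethernet interface of my tracing setup; the interface name is an assumption that depends on your machine:

# log every DNS query coming from the phone, one domain name per line
sudo tshark -i eth0 -Y "dns.flags.response == 0" -T fields -e dns.qry.name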

So what does this mean on a larger scale? Let's say a network operator has 15,000 3G base stations (extrapolated from here) and 10 million smartphones. If those smartphones were evenly distributed across all base stations, there would be around 660 smartphones per base station, or around 220 smartphones per sector. If each smartphone connected to the network 10 times an hour, that's 2200 requests an hour per sector. If each connection is held for 10 seconds on average, that's 2200 requests / (60 minutes * 60 seconds / 10) = 6 concurrent connections just for the background traffic of the devices.
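For those who want to play with the assumptions, here's the same back-of-the-envelope calculation as a few lines of shell arithmetic:

echo $(( 10000000 / 15000 ))   # smartphones per base station: 666
echo $(( 666 / 3 ))            # smartphones per sector: 222
echo $(( 222 * 10 ))           # requests per hour per sector: 2220
echo $(( 2220 * 10 / 3600 ))   # concurrent background connections: 6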

Some cells are obviously busier than others, so some probably see two or three times this number, i.e. 15-20 concurrent connections just for background traffic. As the number of concurrent users in a 3G cell is likely to be less than a three digit figure, that's quite a sizable percentage. And the 10 connections per hour is perhaps even a conservative number, as many subscribers use instant messengers that need to send frequent TCP keep-alive packets so they don't lose connectivity to the server. On the other hand, many smartphones are used over Wi-Fi, especially when people are at home, which is likely to significantly reduce background traffic over cellular in residential areas. Not so in business areas, however.

So where do we go from here? One good thing is that LTE networks are mostly in place now and many new smartphones, especially those of heavy users, are LTE-capable by now. That significantly reduces the load on 3G networks. And from what I hear, the number of simultaneous users in an LTE cell can be much higher than in a 3G cell. The right technology at the right time.

The GSM Logo: The Mystery of the 4 Dots Solved

A few weeks ago I asked the question here if anyone knew what the 4 dots in the GSM logo actually stood for. A few people contacted me with good suggestions as to what the dots could stand for, which was quite entertaining, but nobody really knew. On the more serious side, however, a few people gave me interesting hints that finally led me to the answer:

On gsm-history.org, a website of Friedhelm Hillebrand & Partners, an article is published that was written by Yngve Zetterstrom. Yngve was the rapporteur of the Marketing and Planning (MP) group of the MoU (the Memorandum of Understanding group, later to become the GSM Association (GSMA)) in 1989, the year in which the logo was created. The article contains interesting background information on how the logo was created, but it did not contain any details on the 4 dots. After some further digging I found Yngve on LinkedIn and contacted him. And here's what he had to say to solve the mystery:

"[The dots symbolize] three [clients] in the home network and one roaming client."

There you go, an answer from the prime source!

It might be a surprising answer, but from a 1980s point of view it makes perfect sense to put an abstract representation of GSM's roaming capabilities into the logo. In the 1980s, Europe's telecommunication systems were well protected national monopolies and there was no interoperability of wireless systems beyond country borders, save for an exception in the Nordic countries, which had deployed the analogue NMT system and whose subscribers could roam to neighboring countries. But international roaming on a European and later global level was a novel and breakthrough technical feature and idea in the heads of the people who created GSM at the time. It radically ended an era in which people who wanted to go abroad had to remove the telephone equipment installed in their car's trunk (few could afford it, obviously), or alternatively to seal the system or sign a declaration that they would not use their wireless equipment after crossing the border. Taking a mobile phone in your pocket over a border and using it hundreds or thousands of kilometers away from one's own home country was a concept few could have imagined then. And that was only 30 years ago…

P.S.: The phone in the image with the GSM logo on it is one of the very first GSM phones from back in 1992.

Tracing Smartphone Network Interaction over Wi-Fi

Over the years I've come up with a number of ways to trace the network traffic from and to a smartphone for various purposes. So far they all had in common that the setup took some time, effort and, in some cases, bulky hardware. So in quite a number of cases I shied away from taking a trace because the setup just took too long. But now I've come up with a hardware solution for Wi-Fi tracing that isn't bulky and is set up in 60 seconds.

Earlier this year I bought an Edimax USB powered Wi-Fi mini access point that I have since used many times to distribute hotel and office Wi-Fi networks to my devices. Apart from being small, it's easy to configure and ready in less than a minute after being plugged into the USB port for power. To trace the data exchanged with a smartphone, it only needs to be connected to the Ethernet port of my notebook, which in turn is connected to the Internet via another network interface, e.g. its own Wi-Fi card. In addition, Internet sharing has to be activated for the Ethernet port of the PC. This is supported in Windows and also in Ubuntu in the network configuration settings.
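On Linux, the same sharing can also be set up by hand if the GUI option is not available. A rough sketch, assuming eth0 is the wired port connected to the mini access point and wlan0 is the uplink (on top of this, the devices behind eth0 still need IP addresses, e.g. from a DHCP server or static configuration):

# let the kernel route packets between the two interfaces
sudo sysctl -w net.ipv4.ip_forward=1

# rewrite the source address of the phone's packets to that of the uplink
sudo iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE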

Once done, Wireshark can be used to monitor all traffic over the Ethernet interface. If the smartphone is the only device served by the mini access point, only its traffic traverses the Ethernet interface on its way to and from the notebook's Wi-Fi uplink, while the notebook's own traffic goes directly to its Wi-Fi adapter. That means no special filtering of any sort is required to isolate the data flowing to and from the smartphone. The figure on the left shows the setup. Super easy and super quick to set up.
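And for longer, unattended captures, tshark can write everything to a file instead of running the Wireshark GUI; again, eth0 as the name of the shared Ethernet port is an assumption:

# write everything the phone sends and receives into a capture file
sudo tshark -i eth0 -w smartphone-trace.pcap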