Transit and Interconnect Speeds in Germany in 2013

A couple of days ago I reported on the findings of the German telecommunication regulator's report for 2013. Among other things, the report contained two numbers: In 2013, mobile networks in Germany carried a total of 267 million gigabytes while fixed line networks carried 8 billion gigabytes. The numbers are staggering, but what do they mean in terms of the transit and interconnect links required, i.e. how much data flows to and from the wider Internet into those networks per second?

Let's take the mobile number first, 267 million gigabytes per year. There are four mobile network operators in the country, so let's say one of them handles 30% of that traffic. That share amounts to 80 million GB per year, or 219,178 GB per day. Divided by 24 to get the traffic per hour and then by 60 and again by 60 to get down to the GB per second, that's 2.5 GB/s or roughly 20 Gbit/s. This number does not yet include usage variations throughout the day, so the peak throughput during the busiest hours must be even higher.

On the fixed line side, the number is even more staggering. Let's say the incumbent handles 70% of the 8 billion gigabytes per year (only an assumption, use your own value here). This boils down to an average backhaul speed of 1.4 Tbit/s (1,400 Gbit/s), plus whatever the peak throughput during busy hours is above that average.
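For those who want to verify the numbers or plug in their own market share assumptions, the conversion from gigabytes per year to Gbit/s fits into two shell one-liners (the 30% and 70% shares are, as above, just assumptions):

# mobile: 267 million GB per year, 30% share, converted to Gbit/s
awk 'BEGIN { printf "%.1f Gbit/s\n", 267000000 * 0.3 / 365 / 86400 * 8 }'

# fixed line: 8 billion GB per year, 70% share, converted to Gbit/s
awk 'BEGIN { printf "%.1f Gbit/s\n", 8000000000 * 0.7 / 365 / 86400 * 8 }'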

I'm impressed!

Owncloud And Selfoss Brain Transplant – Using TAR For A Running Backup

My Owncloud and Selfoss servers at home have become a central service for me over the past year and I have made sure that I can remotely cut over my services to an alternative access should the DSL line or DSL router fail. For the details see here and here. While I'm pretty well covered for line failures, I'd still be dead in the water if the server itself failed. Agreed, this is unlikely but not unheard of, so I decided to work on a backup strategy for such a scenario as well.

What I need is not just the data that I could restore on a backup server; that backup server must also be up and running so that I can quickly turn it into the main server, even while not at home, should it become necessary. As it's a backup server and slow service is better than no service, it doesn't have to be very powerful. Therefore, I decided to use a Raspberry Pi for the purpose, on which I've installed Owncloud and Selfoss. Getting the data over from the active servers with tar is actually simpler than I thought:

To create a copy I think it's prudent to halt the web server before archiving the Owncloud data directory as follows:

# stop the web server so no files change while the archive is created
sudo service apache2 stop
cd /media/oc/owncloud-data

# pack the complete data directory into a compressed tar archive
sudo tar cvzf 2014-xx-xx-oc-backup.tar .
sudo service apache2 restart

This creates a complete copy of all data of all users. The tar file can become almost as big as all the data stored in Owncloud, so there must be at least as much free disk space left on the server as there is data in the Owncloud data folder.

In the second step, the tar archive is copied to the target machine; scp or sftp do a good job here. Once the file has been copied, the tar file should be deleted on the server to free up the space again.
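A minimal sketch of the copy and cleanup steps; the hostname, username and target path are just placeholders for whatever your setup uses:

# copy the archive over to the backup machine
scp 2014-xx-xx-oc-backup.tar pi@backup-pi.local:/path-to-tar/

# and free up the space on the main server again
sudo rm 2014-xx-xx-oc-backup.tar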

On the target machine, I stop the web server as well, delete the owncloud data directory and then unpack the tar archive:

sudo service apache2 stop
cd /media/oc

# remove the old data directory and recreate it empty
sudo rm -rf owncloud-data
sudo mkdir owncloud-data
cd owncloud-data

# unpack the archive and hand ownership back to the web server user
sudo tar xvzf /path-to-tar/2014-xx-xx-oc-backup.tar
sudo chown -R www-data:www-data /media/oc/owncloud-data
sudo service apache2 start
rm /path-to-tar/2014-xx-xx-oc-backup.tar

And once that is done the backup Owncloud instance runs with the calendar, address book and file data of the standard server.

One important thing to keep in mind when setting this up for the first time is to copy the password salt value from /var/www/owncloud/config/config.php over to the backup server as well, as otherwise it's not possible to log into the backup Owncloud instance. And finally, have a close look at the 'trusted domains' parameter in the same configuration file on the backup server to check whether it matches your requirements. For details have a look at the end of this page.
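For orientation, these are the two entries in config.php to look for; the values shown here are of course only placeholders, your file will contain your own salt and domain names:

'passwordsalt' => 'copy-the-value-from-the-main-server-here',
'trusted_domains' => array('www.mydomain.com', 'backup-pi.local'),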

The same principle also works for getting a working copy of the Selfoss RSS server.

Using SSH to Tunnel Packets Back To The Homecloud

In my post from a couple of days ago on how to home-cloud-enable everyone, one of the important building blocks is an SSH tunnel from a server with a public IP address to the Owncloud server at home, which is not directly accessible from the Internet. I didn't go into the details of how this works, promising to do that later. This is what this post is about.

So why is such a tunnel needed in the first place? Technically savvy users can of course configure port forwarding on their DSL or cable router to their Owncloud server, but the average user just stares at you when you make the suggestion. Also, many alternative operators today don't even give you a public IP address anymore, so even the technically savvy users are out of luck. For a universal solution that works behind any connection, no matter how many NATs are put in the way, a different approach is required.

My solution to the problem is actually pretty simple once you think about it: What NATs and missing public IP addresses do is prevent incoming traffic that is not the result of an outgoing connection establishment request. To get around this, my solution, which I've been running from my own network at home for some time now over a cellular connection (!) 24/7, establishes an SSH tunnel from my Owncloud server to a machine on the Internet with a public IP address and tunnels the TCP port used for HTTPS (443) from that public machine through the SSH tunnel back home. If you think it's complicated to set up you are mistaken; a single command is all it takes on the Owncloud server:

nohup sudo ssh -o ServerAliveInterval=60 -o ServerAliveCountMax=2 -p 16999 -N -R 4711:localhost:443 ubuntu@ec2-54-23-112-96.eu-west-1.compute.amazonaws.com &

O.k., it's a mouthful, so here's what it does: 'nohup' ensures that the ssh connection stays up even when the shell window is closed. If not given, the ssh task dies when the shell task goes away. 'sudo' is required as TCP port 443 used for secure https requires root privileges to forward. The 'ServerAliveInterval' and 'ServerAliveCountMax' options ensure that a stale ssh tunnel gets removed quickly. The '-p 16999' is there because I moved the ssh daemon on the remote server from port 22 to 16999, as otherwise there are too many automated attempts to ssh into my box from unfriendly bots. Not that that does any harm, but it pollutes the logs. The '-N' option prevents a shell session from being established, because I just need the tunnel. The '-R' option is actually the core of the solution as it forwards the HTTPS TCP port 443 to the other end. Instead of using the same port on the other end, I chose to use a different one, 4711 in this example. This means that the server is accessible later on via 'https://www.mydomain.com:4711/owncloud'. Next come the username and hostname of the remote server. And finally, the '&' operator sends the command to the background so I can close the shell window from which the command is started.

All of this raises the question, of course, of which server on the Internet to connect to. Any Linux based server will do and there are lots of companies offering virtual servers by the hour. For my tests I chose to go with Amazon's EC2 service as they offer a small Ubuntu based virtual server for free for a one year trial period. It's a bit ironic: I am using a virtual server in the public cloud to keep my private data out of that very same cloud. But while it is ironic it meets my needs, as all data traffic to that server and through the SSH tunnel is encrypted end to end via HTTPS, so nothing private ever ends up on that server. Perfect. Setting up an EC2 instance can be done in a couple of minutes if you are familiar with Ubuntu or any other Linux derivative. Once done, you can SSH into the new virtual instance, import or export the keys you need for the ssh tunnel the command above establishes, and set the firewall rules for that instance so that port 16999 for ssh and 4711 for https are open to the outside world.
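One detail worth checking: by default, OpenSSH binds a remotely forwarded port only to the loopback interface of the remote machine unless 'GatewayPorts yes' is set in its sshd configuration, so it pays to verify that the forwarded port is really reachable from the outside. A quick check could look like this, with hostname and ports as in the example above:

# on the EC2 instance: is port 4711 listening on all interfaces or only on 127.0.0.1?
netstat -tln | grep 4711

# from anywhere: does the Owncloud server at home answer through the tunnel?
# (-k skips certificate validation in case a self-signed certificate is used)
curl -k https://www.mydomain.com:4711/owncloud/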

And that’s pretty much it, there’s not even additional software that needs to be installed on the EC2 instance.

Raising the Shields Part 14: Setting Up An OpenVPN Server With A Raspberry Pi

After all the mess with Heartbleed a few weeks ago and updating my servers I started thinking about the current state of security of my VPN gateway at home. So far, I've used a very old but rock solid Linksys WRT-54G with DD-WRT on it for providing Wi-Fi at home and VPN server functionality for when I'm roaming. But the latest DD-WRT version for that hardware is several years old and was fortunately made before the Heartbleed bug was introduced. So I was safe on that side. But for such a vital and security sensitive function I don't think it's a good idea to run such old software. So I decided to do something about it and started to look into how to set up an OpenVPN server on a platform that receives regular software updates. And nothing is better and cheaper for that than a Raspberry Pi.

Fortunately I always have a spare at home and after some trial and error I found these very good step-by-step instructions on how to set up OpenVPN on a Raspberry Pi over at ReadWrite.com. If you take the advice to type in the commands rather than copy/paste them seriously, it's almost a no-brainer if you know your way around Linux a bit.

The second part of the instructions deals with setting up the client side on MacOS and Windows. Perhaps the .ovpn configuration file is also usable on a Linux system, but I decided to configure the OpenVPN client built into the Ubuntu network manager manually. The four screenshots below show how that is done. As some networks have trouble forwarding the VPN tunnel with a maximum packet size (MTU) of 1500 bytes, I chose to limit the packet size to 1200 bytes, as you can see in the second screenshot.
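For those who prefer a configuration file over the network manager dialogs, the same MTU limit should, as far as I can tell, correspond to a single directive in the client's .ovpn file; this is an assumption about how the graphical setting maps to the config file, so treat it as a sketch:

# limit the tunnel packet size so it also passes networks that have
# trouble with full-size 1500 byte packets
tun-mtu 1200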

[Screenshots: Vpn-client1, Vpn-client2, Vpn-client3, Vpn-client4]

Another thing that made the whole exercise worthwhile is that I now understand a lot better how OpenVPN uses the client and server certificates. I always assumed that it was enough to just remove a client certificate from the server to prevent that client from establishing a VPN connection. It turns out that this is not correct. The OpenVPN server actually doesn't need the client certificate at all, as it only checks whether the certificate supplied by the client during connection establishment was signed by the certificate authority that is set up on the server as part of the OpenVPN installation. That was surprising. Therefore, revoking a client's access rights means that the client certificate has to be put on a local certificate revocation list the server checks before proceeding with the connection establishment. I'll have a follow-up post to describe how that is done.
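As a teaser, and only as a rough sketch assuming the easy-rsa 2 scripts the tutorial installs under /etc/openvpn/easy-rsa (paths and the client name are placeholders), revoking a client and making the server check the revocation list looks roughly like this:

# on the server: revoke the client certificate and regenerate the
# certificate revocation list (keys/crl.pem)
cd /etc/openvpn/easy-rsa
source ./vars
./revoke-full client1

# copy the revocation list to where the OpenVPN server can read it and add
#   crl-verify /etc/openvpn/crl.pem
# to the server configuration so the list is checked on every connection attempt
sudo cp keys/crl.pem /etc/openvpn/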

A final thought today is on processor load: My VDSL uplink at home limits my tunnel to a maximum speed of 5 Mbit/s. A download at this speed takes around 50% of the processor capacity of the Raspberry Pi, independent of whether I'm using the NATed approach as described in the tutorial or simple routing between the VPN tunnel and my network at home. At the moment I don't care as a faster backhaul is not in sight. But at some point I might need to revisit that.

Protonet: The Cloud In The Company

Ever since I set up an Owncloud server at home, I've made my peace with cloud services, as my home has become my cloud and my data is where it belongs: with me, and with me only. While my personal needs for cloud services mainly focus on file sharing and calendar/address book synchronization, there is a much wider spectrum of cloud applications that companies could benefit from, such as secure communication with customers, project management and team interaction. And this is where Protonet comes in with their small on-site servers and cloud applications. I seldom plug companies and products on this blog but in this case I felt I had to make an exception. Even though their software doesn't seem to be open source, their product makes the Internet and private data again what I think they should be: decentralized and in the hands of those who own it. Best of luck to this startup, I wish you raving success! (via Heise news)

(Video: 'This is Protonet!' from Protonet on Vimeo)

A Quick Book Review: Rogue Code by Mark Russinovich

Back in October 2012 I wrote a raving review of 'Trojan Horse', a fictional cyber-crime story by Mark Russinovich, one of the very few writers who gets the technical details right. A lot has happened since then and, living in a post-Snowden era, I was a bit skeptical about whether I could still enjoy a cyber-crime / cyber-espionage novel. Fortunately, Mark's latest novel, Rogue Code, is more about cyber-crime than cyber-espionage, as the plot focuses on a heist to steal money from the New York Stock Exchange. Being an engineer I wasn't really sure if I would enjoy a novel about the NYSE, so my skepticism continued. But then, I really liked his previous two novels, so I decided to go for it. And I wasn't disappointed! The plot starts a bit slowly perhaps, but I was quickly hooked again and finally had to read the second half of the novel in one sitting. As in his previous books, he accurately describes the technology bits and does it in a way that geeks know exactly what he's talking about while less geeky readers will not be turned off. Kudos to Mark, Rogue Code is another great book and I'm looking forward to the next one!

How Much More Will UMTS Evolve In The Field Now That LTE Is Here?

Two years ago I would not have speculated about UMTS being abandoned anytime soon. LTE was still relatively new and in Europe, UMTS was the main air interface technology for high speed mobile Internet access. But things have changed rapidly and LTE has taken over that role by storm. Today, most high-end and even mid-range smartphones come with LTE support and in many countries, LTE is deployed over a large coverage area and with bandwidths far surpassing UMTS. UMTS might still have the coverage edge for now, but in some countries LTE is available in many rural areas where UMTS is not, due to its use of the 800 MHz digital dividend frequency band. All of this makes me wonder whether the many new UMTS features that were standardized in 3GPP Releases 8 to 12 will ever be used in live networks!?

Those that I can think of that would provide some value are, for example, a faster high speed uplink beyond the two to four Mbit/s that can be observed today, putting the slow FACH onto a common E-DCH, and lots of other small enhancements on the protocol layer to make the system more efficient and to support more simultaneously active users per base station. But with LTE going strong I have my doubts whether the time and effort will be spent. After all, network traffic is shifting towards LTE and UMTS is rapidly becoming the fallback solution in areas where LTE is not (yet) available, and of course for simultaneous voice and data during CS fallback while VoLTE keeps having teething problems.

So I wouldn't be surprised if in another two years LTE has reached the same coverage level as UMTS and the GSM/UMTS-only legacy device pool has shrunk considerably. Perhaps even VoLTE has made it into the real world by then and can really serve as a circuit switched voice replacement. But VoLTE has been in the making for so long that I'd say it is the big unknown, and a must-have for any strategy to ramp down UMTS, as falling back to GSM for a voice call and then not having Internet access during the call is just out of the question.

Pushing Owncloud Filesize Limits Beyond a Gigabyte

It doesn't happen very often but every now and then I need to upload a really big file to my Owncloud server. And by really big I mean > 1 GB. That's a bit of an issue as the default Owncloud settings limit the file size to around 500 MB. In the past I tried a couple of times to increase the file size limitation with mixed success. When I recently had a bit more time on my hands, however, I investigated a bit further and managed to push the file size limit on my NUC based Owncloud beyond the biggest file I had at hand, 1.6 GB. So here are the values I changed:

In /var/www/owncloud/.htaccess:

php_value upload_max_filesize 3000M
php_value post_max_size 3000M
php_value memory_limit 3000M

In /etc/php5/apache2/php.ini:

max_input_time = 360        ; default was 60 seconds, which is too short to upload large files
post_max_size = 3000M       ; might be overridden by the value in .htaccess
upload_max_filesize = 3000M ; might be overridden by the value in .htaccess

And that's it, after a 'sudo service apache2 restart' large file uploads work as they should. For further details see this post on the Owncloud forum.


Raising the Shields Part 13: Secure Remote VNC Desktop with a Raspberry Pi SSH Bridge

I do a lot of remote computing support for my family members and so far used VNC remote screen viewing over an unencrypted connection for the purpose. This is obviously far from perfect from a security point of view, but until recently I didn't find anything that is more secure, as simple to use, and that doesn't require a third-party service that probably decrypts the session in the middle as well. After my recent exploration of ssh (see my posts on ssh login certificates and ssh SOCKS proxying) I thought of a solution with ssh to protect my VNC sessions as well.

Another shortcoming of my old VNC solution was that whenever I or the supported parties changed locations, the DSL router at home had to be reconfigured. Sometimes I am at home behind my home NAT while the other party is behind another NAT. At other times the person to be supported is behind the home NAT and I'm on the other side of the planet. And finally, there are times when both parties are not at home and there still needs to be a way to get connected without using a third-party service in the middle. In the past, I figured out different approaches for this, such as the VNC server establishing a connection to the client in some scenarios, the VNC client contacting the server in others, and reconfiguring the router at home. Yes, that was complicated.

The solution I have now found fixes both issues and works as follows: To be location independent there needs to be one secure anchor point that is reachable from home and also when one or both parties are not at home and behind other NATs. This secure anchor point is a Raspberry Pi in my home network to which both parties can establish an ssh tunnel through a port that is forwarded from my ADSL router to the Pi.

The side that wants to export the screen establishes the ssh tunnel with the following command, which forwards the VNC server's port (TCP 5900), to which client viewers can connect, over the ssh tunnel to the Raspberry Pi. On a PC running Ubuntu the commands for this look as follows (for the Windows/Putty version have a look here):

x11vnc -localhost -usepw -forever -display :0
ssh -N -R 17934:localhost:5900 -p 9212 username@domain.com

The first command launches the VNC server and the '-localhost' option ensures the server port is only accessible to applications running on the PC and not to the outside world. The ssh command that follows uses the '-N' option in order not to open a remote shell window and the '-R' option to forward the local server port 5900 to port 17934 on the Raspberry Pi. The '-p 9212' option instructs the ssh client to use TCP port 9212 to connect to the Raspberry Pi instead of the default ssh port 22. While this doesn't add a lot of security, it at least keeps the authentication log clean as that port is not found by automated bots looking for vulnerable ssh servers on port 22. The final parameter is the username and the domain name of my home network connection; a dynamic DNS service keeps it updated with the IP address, which changes once a day. One thing that comes in quite handy at this point is that I use certificates for ssh authentication rather than passwords (see my post here), so no password needs to be typed in.

On the side that wants to view the screen, an ssh tunnel is established with a slightly different ssh command that pulls port 17934 from the Raspberry Pi to the same local port number. Notice the use of the '-L' option instead of the '-R' option, as this tunnel does exactly the opposite:

ssh -N -L 17934:localhost:17934 -p 9212 username@domain.com

And that's pretty much it. Once both tunnels are in place, any VNC viewer such as Remmina can be used to connect to the VNC server over the two ssh tunnels. Remmina even has the capability to establish the ssh tunnel as part of a connection profile. A nice side effect is that there is no particular order in which the two ssh tunnels have to be established. A Raspberry Pi, a forwarded TCP port on the home router and three shell commands are all it takes. Quite amazing.
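If no graphical viewer like Remmina is at hand, connecting from the command line works just as well. A sketch, assuming a viewer such as TigerVNC's or TightVNC's vncviewer is installed:

# connect to the locally forwarded port; the double colon tells the viewer
# that 17934 is a TCP port number rather than a display number
vncviewer localhost::17934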

One shortcoming of this three-shell-command approach is that it is only suitable for supporting trusted relatives and friends, as the ssh tunnel also gives the supported party access to a command shell on the Raspberry Pi. This can be fixed with a little extra time and effort as described here.

(P.S. And in case you wonder about parts 1 – 12 of 'Raising the Shields', have a look here.)

Under The Hood: NAS Signaling for AMR-WB

If you are in the business of analyzing UMTS network traces for development, debugging or operational reasons and ever wondered how to find out if a speech call was established as wideband or narrowband or if the codec was changed at some point during the call, I've got an interesting tip for you today that I was recently shown myself: Have a look at the

nas-Synchronisation-Indicator

information element in the RadioBearerSetup or RadioBearerReconfiguration messages of a call. It's quite near the top of the message in the RAB-Info section, so it can be found quickly, and the 4 bits of the IE describe the speech codec the MSC wants the mobile device to use for the call:

Narrowband: 0110 -> UMTS AMR2
Wideband:   1010 -> UMTS AMR-WB (see 3GPP TS 26.103, Table 4.2)

Apart from the table in 3GPP TS 26.103, TS 24.008 gives some more details:

The ME shall activate the codec type received in the NAS Synchronisation Indicator IE.

The NAS Synchronisation Indicator IE shall be coded as the 4 least significant bits of the selected codec type (CoID) defined in 3GPP TS 26.103 [83], subclause 6.3.