Using SSH to Tunnel Packets Back To The Homecloud

In my post from a couple of days ago on how to home-cloud-enable everyone, one of the important building blocks is an SSH tunnel from a server with a public IP address to the Owncloud server at home, which is not directly accessible from the Internet. I didn’t go into the details of how this works, promising to do that later. That is what this post is about.

So why is such a tunnel needed in the first place? Technically savvy users can of course configure port forwarding on their DSL or cable router to their Owncloud server, but the average user just stares at you when you make the suggestion. Also, many alternative operators today don’t even give you a public IP address anymore, so even the technically savvy users are out of luck. For a universal solution that works behind any connection, no matter how many NATs are put in the way, a different approach is required.

My solution to the problem is actually pretty simple once you think about it: what NATs and missing public IP addresses do is prevent incoming traffic that is not the result of an outgoing connection establishment request. To get around this, my solution, which I’ve been running from my own network at home over a cellular connection (!) 24/7 for some time now, establishes an SSH tunnel from my Owncloud server to a machine on the Internet with a public IP address and tunnels the TCP port used for HTTPS (443) from that public machine through the SSH tunnel back home. If you think it’s complicated to set up, you are mistaken; a single command is all it takes on the Owncloud server:

nohup sudo ssh -o ServerAliveInterval=60 -o ServerAliveCountMax=2 -p 16999 -N -R 4711:localhost:443 user@publicserver.example.com &

O.k., it’s a mouthful, so here’s what it does: ‘nohup’ ensures that the ssh connection stays up even when the shell window is closed; without it, the ssh task dies when the shell task goes away. ‘sudo’ is required as TCP port 443, used for secure https, requires root privileges to forward. The ‘ServerAliveInterval’ and ‘ServerAliveCountMax’ options ensure that a stale ssh tunnel is torn down quickly. The ‘-p 16999’ is there because I moved the ssh daemon on the remote server from port 22 to 16999, as otherwise there are too many automated attempts by unfriendly bots to ssh into my box. Not that this does any harm, but it pollutes the logs. The ‘-N’ option prevents a shell session from being established, because I just need the tunnel. The ‘-R’ option is the core of the solution: it forwards connections arriving on port 4711 of the remote machine back through the tunnel to port 443 on the Owncloud server. Instead of using the same port on the other end, I chose a different one, 4711 in this example, which means the server is later reachable via the public machine’s address on port 4711. Next come the username and address of the remote server. And finally, the ‘&’ operator sends the command to the background so I can close the shell window from which it was started.
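One thing ‘nohup’ alone does not give you is automatic reconnection: after a longer network outage the ssh process eventually dies for good. A common refinement, sketched below, is to wrap the command in a small retry loop. This is not part of the original one-liner; the username and hostname are placeholders, and ‘ExitOnForwardFailure’ is added so ssh gives up (and the loop retries) if the remote port cannot be bound:

```shell
#!/bin/sh
# keep_tunnel: re-establish the reverse tunnel whenever ssh exits.
# user@publicserver.example.com is a placeholder for the real
# account and host on the public machine.
keep_tunnel() {
  while true; do
    ssh -o ServerAliveInterval=60 -o ServerAliveCountMax=2 \
        -o ExitOnForwardFailure=yes \
        -p 16999 -N -R 4711:localhost:443 user@publicserver.example.com
    sleep 10   # brief pause before reconnecting
  done
}

# Start it in the background instead of the bare ssh command:
# keep_tunnel &
```

Run with ‘sudo’ if your setup requires it, exactly as with the single command above.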

All of this raises the question, of course, of which server on the Internet to connect to. Any Linux-based server will do, and there are lots of companies offering virtual servers by the hour. For my tests I chose Amazon’s EC2 service, as they offer a small Ubuntu-based virtual server for free for a one-year trial period. It’s a bit ironic: I am using a virtual server in the public cloud to keep my private data out of that very same cloud. But while it is ironic, it meets my needs, as all data traffic to that server and through the SSH tunnel is encrypted end to end via HTTPS, so nothing private ever ends up on that server. Perfect. Setting up an EC2 instance can be done in a couple of minutes if you are familiar with Ubuntu or any other Linux derivative. Once done, you can SSH into the new virtual instance, import or export the keys you need for the ssh tunnel the command above establishes, and set the firewall rules for that instance so that port 16999 for ssh and port 4711 for https are open to the outside world.
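Two server-side details are easy to miss here. First, by default the OpenSSH daemon binds remotely forwarded ports such as 4711 to localhost only, so ‘GatewayPorts’ has to be enabled in sshd_config for the port to be reachable from the Internet. Second, if you prefer the AWS command line over the web console, the two firewall openings can be scripted. The following is a sketch under those assumptions; the security group ID is a placeholder:

```shell
# On the public server, in /etc/ssh/sshd_config: without this,
# sshd binds the forwarded port 4711 to localhost only and it is
# not reachable from the outside.
#   GatewayPorts yes
# Then restart sshd, e.g.: sudo service ssh restart

# Open the moved ssh port and the forwarded https port in the EC2
# security group (sg-... is a placeholder; the web console works too):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 16999 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 4711 --cidr 0.0.0.0/0
```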

And that’s pretty much it: no additional software even needs to be installed on the EC2 instance.