Using a Raspi as an SSH SOCKS Proxy VPN Server With Firefox

Back in March this year I had a post in which I described a backup plan to access my cloud services at home over the cellular network in case of a DSL line failure. The core of the solution is a Raspberry Pi sitting behind a cellular router waiting for an incoming ssh connection over which I can then access my other machines. The Pi allows access to other machines on the network either from the command line, or via the Pi's graphical user interface that I can forward through the ssh tunnel using VNC.

Forwarding the GUI over the tunnel is quite useful for accessing the web based user interfaces of my DSL gateway and cellular gateway routers via a web browser running on the Pi to analyze the fault and to modify port forwarding settings. A shortcoming of this approach, however, is that the web browser is quite slow on the Pi, especially when used remotely. Also, it doesn't handle some of the web page input fields very nicely, so some configuration tasks are a bit tricky. When a colleague recently showed me a much simpler and faster solution, I immediately jumped ship:

Instead of forwarding the graphical user interface of the Pi through the ssh tunnel, the ssh client can also be used as a SOCKS proxy for Firefox (or any other browser, for that matter) running on my notebook. When a web browser is used in SOCKS proxy mode, all web page requests are tunneled from the local ssh SOCKS proxy TCP port to the SOCKS proxy server running as part of the ssh daemon process.

In practice, it's surprisingly simple to set up. On the Raspberry Pi side, no configuration whatsoever is necessary! On the client side, the command to start the ssh client as a SOCKS proxy looks as follows in a Linux command shell (on a Windows machine, PuTTY should do the trick):

ssh -D 10123 -p 22999 user@host

In this example, 10123 is the local port number that has to be used as the SOCKS port number in Firefox, as shown in the picture on the left. The '-p 22999' part is optional and is given in case the ssh server has been mapped away from the standard ssh port 22 to, in this example, port 22999.

In Firefox, the SOCKS proxy mode has to be configured as shown in the image. In addition 'network.proxy.socks_remote_dns' has to be set to 'true' in 'about:config' so the browser also forwards DNS requests through the SOCKS connection.

Obviously, transmitting HTML pages instead of screen updates over the ssh connection makes interacting with the web interfaces of the remote routers a lot snappier. And by the way: the proxying is not limited to web servers in my network at home, as the SOCKS server running as part of the ssh daemon on the Raspi is also happy to establish a TCP connection to any server on the Internet. Also, any other SOCKS-capable program, such as the Thunderbird e-mail client, can use the proxy to tunnel its traffic.

Before my colleague told me, I never thought this could actually be done by ssh, as this proxying capability is not part of the original ssh functionality. Wikipedia has a nice article on how SOCKS works: when a SOCKS-capable program (e.g. Firefox) contacts the proxy for a new TCP connection, it first tells the local SOCKS front end which IP address and port it wants to contact. The front end then contacts the SOCKS back end on the Raspberry Pi over the ssh tunnel, which in turn creates the connection to the requested IP address and TCP port. The browser then goes ahead and sends the HTTP request over this connection. The SOCKS front end can handle many independent TCP connections simultaneously, as it can distinguish the different data streams by the local TCP port from which the SOCKS-capable program initially established each connection. How nifty 🙂
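The exchange described above can be sketched in a few lines of Python. This builds the SOCKS5 greeting and CONNECT request a client like Firefox sends to the local proxy port, following RFC 1928 (a simplified sketch for illustration only; a real client also evaluates the server's replies):

```python
import struct

# Greeting: version 5, offering one auth method, 0 = "no authentication".
greeting = struct.pack("!BBB", 5, 1, 0)

def socks5_connect_request(host: str, port: int) -> bytes:
    """Build a SOCKS5 CONNECT request for a domain name (RFC 1928):
    VER=5, CMD=1 (CONNECT), RSV=0, ATYP=3 (domain name), then a
    length-prefixed hostname and the port in network byte order."""
    name = host.encode("ascii")
    return (struct.pack("!BBBB", 5, 1, 0, 3)
            + bytes([len(name)]) + name
            + struct.pack("!H", port))

req = socks5_connect_request("example.org", 80)
```

Because the request carries the hostname itself (ATYP=3), the proxy end can resolve DNS remotely, which is exactly what the 'network.proxy.socks_remote_dns' setting mentioned above enables.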

SSH Client Certificates to Talk to My Raspberry PIs

I like to interact with my Raspberry Pis at home on the shell level for lots of different things, and I can't count the number of times I open a remote shell window every day for various purposes. I also like to keep my virtual desktop tidy, so I usually close shell windows when I'm done with a specific task. The downside is that I have to type in the server password frequently, which is a pain. So recently a colleague of mine gave me the idea to use ssh client keys (often loosely called client certificates) to get rid of the password prompts when I open a new ssh session to a remote server. There are a few things that have to be put into place, and I thought I'd put together a quick mini-howto, as the information I could find on the topic was a bit more confusing than necessary.

Step 1: Create a public/private key pair on the ssh CLIENT machine

  • Check that '~/.ssh' exists
  • Generate a public/private keypair with: 'ssh-keygen -t rsa'
  • The command generates the following two files in '~/.ssh': id_rsa (the private key) and id_rsa.pub (the public key)

Step 2: Put the public key part of the client on the ssh SERVER machine

  • Check that the '.ssh' directory exists in the home folder of the user you want to log in as
  • Then do the following:

cd .ssh
nano authorized_keys

  • Add the content of the client's 'id_rsa.pub' file as a new line in the 'authorized_keys' file on the server side
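An authorized_keys entry is a single line: key type, a base64 blob, and an optional comment. A useful property when pasting keys around is that the blob itself starts with a length-prefixed copy of the key type, so a quick sanity check is possible. This is a hypothetical helper, not part of the setup above (the sample entry is structurally valid but not a real key):

```python
import base64
import struct

def check_authorized_keys_line(line: str) -> bool:
    """Check that an authorized_keys entry is well-formed: the algorithm
    name in the first field must match the name embedded at the start
    of the base64-encoded key blob."""
    fields = line.strip().split()
    if len(fields) < 2:
        return False
    key_type, blob_b64 = fields[0], fields[1]
    try:
        blob = base64.b64decode(blob_b64)
    except Exception:
        return False
    if len(blob) < 4:
        return False
    # The blob begins with a 4-byte big-endian length, then the type string.
    (name_len,) = struct.unpack("!I", blob[:4])
    return blob[4:4 + name_len].decode("ascii", "replace") == key_type

# Build a structurally valid sample entry to demonstrate the check:
sample_blob = struct.pack("!I", 7) + b"ssh-rsa" + b"\x00" * 16
sample_line = "ssh-rsa " + base64.b64encode(sample_blob).decode() + " pi@client"
```

A line that got truncated or mangled in the paste will fail the check instead of silently being ignored by the ssh daemon.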

Step 3: Configure the SSH Daemon on the SERVER machine to accept client certificates

These commands make the SSH daemon accept client certificates:

  cd /etc/ssh

  sudo cp sshd_config sshd_config.bak

  sudo nano sshd_config

  • Make sure the following three lines are present and uncommented:

  RSAAuthentication yes
  PubkeyAuthentication yes
  AuthorizedKeysFile %h/.ssh/authorized_keys

  • Restart the SSH daemon to finish the process with: 'sudo /etc/init.d/ssh restart'
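Since a directive that is still commented out is the most common reason this step fails, a few lines of Python can double-check which directives are actually in effect. A hypothetical helper, not something the steps above require (note that for repeated directives, sshd uses the first occurrence):

```python
def active_directives(config_text: str) -> dict:
    """Return the effective value of each non-commented sshd_config
    directive; sshd honours the first occurrence of a repeated one."""
    result = {}
    for raw in config_text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        parts = line.split(None, 1)
        if len(parts) == 2 and parts[0] not in result:
            result[parts[0]] = parts[1]
    return result

# A sample config fragment with one commented-out directive:
sample = """
#PubkeyAuthentication no
RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile %h/.ssh/authorized_keys
"""
directives = active_directives(sample)
```

Running it over the real file would be a matter of reading '/etc/ssh/sshd_config' and checking that all three required directives show up with the expected values.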

Once done, ssh can be used the same way as before but there's no password prompt anymore. Great!

Panopticlick and Online Privacy

I prefer not to be tracked by ad-networks and other 'services' on the net and so far thought I was pretty much o.k. with having my cookies deleted whenever I end a browser session and having flash cookies disabled by default. But now it seems this is not quite the case.

Have a look over at the Electronic Frontier Foundation's (EFF) Panopticlick project and run the test yourself. By analyzing the user-agent information the web browser gives the web server when it connects together with additional information that can be queried and returned by JavaScript and Flash content embedded in a page, it is in most cases possible to uniquely identify you again. Yes, uniquely, as the combination of browser version, available fonts on the system, their reported order, time zone, screen size and a couple of other parameters generates such a wide range of combinations.

When running my browser as it is, my PC is identified as unique among the 1.2 million devices already tested. If I activate NoScript to prevent JavaScript and Flash from executing on that page, the detection rate goes down to 1 in 6815 devices. Still a shocking number. And if you add my German IP address into the mix, in combination with the browser's language set to English, the whole thing probably blows up again.
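To put those numbers into perspective: a fingerprint that narrows a browser down to 1 in N carries log2(N) bits of identifying information, and roughly independent attributes (fonts, time zone, screen size, ...) add up in bits. A quick back-of-the-envelope calculation using the figures from my test runs above:

```python
import math

def identifying_bits(one_in_n: float) -> float:
    """Bits of identifying information in a 1-in-N observation."""
    return math.log2(one_in_n)

bits_full = identifying_bits(1_200_000)  # unique among 1.2 million tested
bits_noscript = identifying_bits(6815)   # with JavaScript and Flash blocked
# bits_full is about 20 bits, bits_noscript still almost 13 bits; since
# independent attributes sum in bits, a handful of them is enough to
# single out one browser among millions.
```

Blocking scripts removes roughly 7–8 bits here, which is substantial, but as the numbers show it is nowhere near enough for anonymity.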

Pretty solid research from the EFF, and I hope we soon get some browser plug-ins that randomly change some of this information from time to time to protect my privacy.

Breaking HTTPS Connections in Two Parts Considered Harmful

Last week, About Mobility and Masabists ran a story on the difficulties mobile transcoders have with secure HTTP connections and how they can put themselves into the connection, thereby breaking end-to-end security. I've done some research on my own, and since I am quite opinionated on the topic I wanted to post my results and thoughts here as well.

O.k., so first of all, what is the fuss all about, in simple words: today, when somebody uses his mobile browser to connect to his bank, a secure HTTP (HTTPS) connection is established to the mobile portal of his bank. HTTPS means that before any data is exchanged, the banking portal sends a certificate the browser can cross-check to ensure that it really talks to the server of the bank and has not been redirected to another site. After the certificate has been verified, an encrypted end-to-end connection is established that no one, not even a mobile transcoder in the network, can put itself into.

So for the user this is good, since he can trust HTTPS to verify that he is really connected to the bank, and he can also trust that all data sent and received is encrypted from end to end. For the transcoder this is bad, since it has no chance to transcode the content and do other things with it.

So some smart people came up with what is called link rewriting to circumvent this 'issue', if you want to call it that. With link rewriting, the transcoder doesn't forward a web page to the user with the original HTTPS links but with HTTPS links that point to the transcoder itself. When such an HTTPS connection is established, a secure connection only exists between the browser and the transcoder. The transcoder then establishes a second HTTPS connection to the original server.
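What link rewriting amounts to is easy to show: conceptually, the transcoder does something like the following to every page it serves (a deliberately simplified sketch; the transcoder address is made up, and a real implementation would use a proper HTML parser):

```python
from urllib.parse import quote

# Hypothetical transcoder endpoint that fetches the real URL on your behalf.
TRANSCODER = "https://transcoder.example.net/fetch?url="

def rewrite_https_links(html: str) -> str:
    """Point every https:// href at the transcoder instead of the
    original site, so the 'secure' connection terminates there."""
    out = []
    i = 0
    while True:
        j = html.find('href="https://', i)
        if j == -1:
            out.append(html[i:])
            return "".join(out)
        start = j + len('href="')
        end = html.find('"', start)
        out.append(html[i:start])
        # Original destination survives only as a URL parameter.
        out.append(TRANSCODER + quote(html[start:end], safe=""))
        i = end

page = '<a href="https://bank.example.com/login">Log in</a>'
rewritten = rewrite_https_links(page)
```

The browser following the rewritten link still negotiates a perfectly valid HTTPS session, just with the wrong endpoint, which is exactly why the lock icon keeps showing.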

This means that the user no longer has end-to-end encrypted protection but has to trust the transcoder to keep his data secret. Also, the user can no longer verify that the transcoder actually contacts the original server, as the certificate that can be inspected in the browser is that of the transcoder and not that of the original site.

In addition, this is totally transparent to the user as he will still see the "lock" icon that suggests an end to end secure and encrypted connection. Only when the user actively looks at the certificate will he actually see that the connection is terminated at the transcoder.

Countermeasures: the only way for the user to ensure that this does not happen is to save the original https link as a bookmark. This way the transcoder has no opportunity to rewrite the URL and hence cannot put itself into the transmission chain.

From a user's point of view, I consider breaking an HTTPS connection into two parts very harmful. It only takes one incident where data is stolen via a leak in the transcoder to damage the reputation of HTTPS. Also, if it is suddenly acceptable to break HTTPS connections into two parts for reformatting purposes, why not also use it for statistics, or to ensure that no content the provider does not approve of traverses the network!? No way: the data inside an HTTPS connection belongs only to the user and the server at the ends, and to no one in between, no matter how good the intentions of the party in the middle are.