A Duplex Gap Question

When I recently looked at the FCC's frequency band assignments to US carriers in the 700 MHz band, I noticed one thing that, from a European perspective, looks a bit odd. Perhaps somebody can enlighten me:

LTE bands 12, 13, 14 and 17 in the 700 MHz frequency range are assigned to different network operators and each comes with an individual 20 MHz duplex gap. 10 MHz for the uplink, 10 MHz for the downlink and 20 MHz for the duplex gap make 40 MHz per band. Multiplied by 4, that's 80 MHz spent on duplex gaps.

In Europe, band 20 in the 800 MHz range, which is used by three network operators with 2x10 MHz channels each, only has a combined duplex gap of 11 MHz. To me that looks a lot more economical than spending 80 MHz on duplex gaps!? But perhaps I am missing something!?

Are those duplex gaps in the US used for anything or are they just wasted space?

Update: Thanks for the comments below, I have followed up on this thanks to them in this post.

Probing Layer 1 – Part 2: UMTS Layer 1 Visualization With SDR-Sharp

Since introducing SDR-Sharp in a previous post, I've had a lot of fun discovering things on layer 1 all throughout the spectrum. This post shows a couple of screenshots of UMTS carriers in the uplink and the downlink direction.

One limitation of the tracing solution is the maximum tracing bandwidth, which is limited to around 2 MHz. While this is good enough to show several 200 kHz GSM carriers on the frequency axis, it is by far too narrow to show a full 5 MHz UMTS carrier. What it can show quite nicely, however, are the signal flanks at either end of a 5 MHz channel or the gap between two adjacent 5 MHz carriers. The latter is shown for the downlink direction in the first image on the left. Ignore the pseudo signal energy in the middle of the diagrams, as this is introduced by the hardware and is not received over the air. Apart from clearly identifying that there are two adjacent carriers on air, the image also shows data transmissions on the two carriers. While taking this screenshot my mobile was on the left carrier and I downloaded a mobile web page, which left the redder and broader streaks in the middle of the screen. As even this light load can be seen, it can be assumed that at the time both carriers were pretty much idle.

The second image shows the same channels in the uplink direction, somewhat lower on the frequency axis. At the bottom of the waterfall diagram both uplink channels are unused. Then, about 40% into the waterfall, I clicked on a link in the web page to start a download, which requires data transmission in the uplink. In this case my mobile transmitted on the carrier on the right. There is some signal energy on the left of the waterfall diagram, but this seems to be a reflection of the right carrier, again introduced by the receiver and not really on the air. One can also see quite nicely where actual data was transmitted (the red parts) and where only radio signaling information was exchanged with lower energy (the yellow parts). It also seems my mobile was redirected, as it started uplink communication on the left carrier (the somewhat more solid small yellow line) but the network then moved the communication to the second channel.

In case you want to try this yourself and wonder where to find UMTS carriers, this UARFCN calculator page gives you the needed details. Have fun!
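If you'd rather calculate the frequencies yourself, the mapping between a UARFCN and the carrier frequency is simple. Here's a minimal Python sketch for UMTS band I (2100 MHz), where the frequency is just 0.2 MHz times the channel number and no band offset applies; other bands use additional offsets that are not handled here:

def uarfcn_to_frequency_band1(uarfcn, downlink=True):
    # UMTS band I: f [MHz] = 0.2 * UARFCN, no additional frequency offset
    # downlink UARFCNs: 10562 - 10838, uplink UARFCNs: 9612 - 9888
    if downlink and not 10562 <= uarfcn <= 10838:
        raise ValueError("not a band I downlink UARFCN")
    if not downlink and not 9612 <= uarfcn <= 9888:
        raise ValueError("not a band I uplink UARFCN")
    return 0.2 * uarfcn

# example: center frequencies to tune the SDR to
print(uarfcn_to_frequency_band1(10700))         # downlink, 2140.0 MHz
print(uarfcn_to_frequency_band1(9700, False))   # uplink, 1940.0 MHz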

Raising the Shields – Part 5: The Onion Router (TOR)

Using the Internet privately and anonymously with an off the shelf web browser is next to impossible. The combination of IP address, cookies, what the browser willingly tells web servers about you, add-ons such as Flash communicating with a remote server outside of the browser context, etc. etc., leaves little privacy and anonymity. There's a project, however, that promises help and it's called 'The Onion Router', or TOR for short.

TOR is based on a network of relay nodes that forward encrypted data packets from a client through a TOR entry node, one or more intermediate nodes and an exit node. Before a packet is sent it is encrypted several times, and each TOR node can only remove one encryption layer. Imagine the layers of an onion and you understand why the project has chosen this name. This way each node only knows its direct neighbors and hence your original IP address is concealed.
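The layering idea can be illustrated with a few lines of Python. This is only a conceptual sketch that uses symmetric Fernet keys from the 'cryptography' package to stand in for the per-hop keys TOR actually negotiates; the real protocol is of course far more involved:

from cryptography.fernet import Fernet

# one symmetric key per hop, shared between the client and that node
keys = {hop: Fernet(Fernet.generate_key()) for hop in ("entry", "middle", "exit")}

# the client wraps the payload in layers: the innermost layer is for the
# exit node, the outermost layer for the entry node
packet = b"GET / HTTP/1.1"
for hop in ("exit", "middle", "entry"):
    packet = keys[hop].encrypt(packet)

# each node along the path can only peel off its own layer
for hop in ("entry", "middle", "exit"):
    packet = keys[hop].decrypt(packet)

print(packet)  # the original payload, visible only at the exit node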

I tried TOR for the first time a number of years ago and at the time it was far too slow for my taste for everyday use. When I recently tried it again, however, I noticed that even during busy times of the day the speed is acceptable for web browsing. Don't expect multi-megabit speeds though. In addition to web browsing, TOR can also be used with email programs such as Thunderbird to anonymize the location from which you access your emails, and with other programs that can handle proxying, such as SSH for remote server management and instant messaging clients such as Pidgin.
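To give an idea of what such proxying looks like for a script rather than a GUI program, here's a rough Python sketch that routes an HTTP request through TOR's local SOCKS proxy. It assumes the PySocks package is installed and a standalone TOR daemon is listening on its default port 9050 (the browser bundle uses a different port); note that with this simple monkey-patching approach DNS lookups may still happen locally:

import socket
import socks
import urllib2

# route all new TCP connections of this script through the TOR SOCKS proxy
socks.set_default_proxy(socks.SOCKS5, "127.0.0.1", 9050)
socket.socket = socks.socksocket

# the web server now sees the exit node's IP address instead of mine
print(urllib2.urlopen("http://check.torproject.org/").read()[:200])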

While a number of years ago setting up TOR was a bit of a tricky exercise, things have become much easier these days. The TOR website features a browser bundle that is easy to install and comes preconfigured for immediate use with Firefox in a separate directory from your main Firefox installation. A single click starts the TOR software and, once a connection to the TOR network is established, the package automatically loads the TORified Firefox, which has no plugins except NoScript to disable JavaScript. It also starts no external programs when requested by a web page, to ensure there is no information leakage via IP connections established outside the browser context.

While Panopticlick says my normal browser is unique among 3 million other users, which means that even without cookies I am instantly recognizable by web servers, the TORified Firefox browser is only unique among 1500 others. A pretty good value.

One thing to keep in mind when using TOR is that one can't be certain whether an exit node is hosted by a white hat or a black hat. Therefore beware of using usernames and passwords even in SSL connections, as an exit node operator with access to a certificate authority could produce valid SSL certificates for websites on the fly and thus launch a man-in-the-middle attack on you. There are ways to detect this, too, such as removing all trusted certificates from the TORified Firefox, which triggers an alert each time an HTTPS protected web page is visited and each time a certificate changes afterward.

All things considered, I'd say TOR is very simple to use on a PC today and, being aware of its limitations in terms of exit node security, it can provide anonymity while still being fast enough. In a follow-up post I will have a closer look at the Android version of TOR and a TORified browser.

Probing Layer 1 GSM, UMTS and LTE with a €20 DVB-T Stick and Cool Software

Back in 2007 I ran a post about probing Wi-Fi on layer 1 with Wi-Spy (yes, it was really 6 years ago). I've used it many times since, whenever I wanted to know who and what else was online in the ISM band. All that time I wished I had a similar tool to visualize cellular signals as well. Now I have one, and all it takes is a DVB-T stick for 20 Euros and cool open source Windows software.

Inspired by this talk at the recent Sigint 2013 conference I decided to have a closer look at SDR# (SDRSharp), an open source program that uses a DVB-T USB stick to visualize layer 1 data from a couple of megahertz up to 2.2 GHz. In the lower bands it can even decode AM and FM radio from the IQ data the stick delivers, but that's not what I was after, of course. What I wanted to use it for is to hunt for GSM, UMTS and LTE carriers. There are a number of supported DVB-T sticks with different kinds of hardware, and this page on Osmocom Hardware gives further details on which hardware supports which frequency ranges and the products it is built into. As I wanted to visualize cellular channels in the 750 – 2200 MHz range I needed a stick with an Elonics E4000 front end, so I got a Terratec Cinergy T Stick as shown on the left, which costs around €20 online.

Installation of the Windows based software is pretty simple and also works well for my purposes in a VirtualBox VM with Ubuntu as host and Windows 7 as guest OS. There's no need to install the drivers or any other software that comes with the stick, as a driver for accessing the Realtek chip on the device is part of the SDRSharp installation process described in more detail here. Once the driver is installed, SDRSharp can be started, and after selecting a center frequency in the GSM 900 band (or the GSM 850 frequency range) one can immediately see signals like in the second picture on the left.
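If you'd rather calculate a suitable center frequency than look one up, the mapping between a GSM ARFCN and the carrier frequency is straightforward. Here's a minimal Python sketch for the primary GSM 900 band only (ARFCNs 1 to 124); extended GSM 900, GSM 850 and the 1800/1900 MHz bands use different offsets that are not handled here:

def gsm900_arfcn_to_frequency(arfcn):
    # primary GSM 900: uplink = 890 MHz + 0.2 MHz * ARFCN,
    # downlink = uplink + 45 MHz duplex spacing
    if not 1 <= arfcn <= 124:
        raise ValueError("not a primary GSM 900 ARFCN")
    uplink = 890.0 + 0.2 * arfcn
    downlink = uplink + 45.0
    return uplink, downlink

# example: ARFCN 62 lies roughly in the middle of the band
print(gsm900_arfcn_to_frequency(62))   # (902.4, 947.4) MHz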

As you can see, the channel bandwidth of the three main channels in the picture is 200 kHz, so yes, these really are GSM signals! Also interesting are the different waterfall traces the channels leave. I assume that the fat red channel on the left carries a broadcast channel (BCCH) and hence all timeslots are active all the time. The other channels in the picture seem to be additional carriers of this or other cells without a broadcast channel, as their signal strength varies sharply over time, which could be because some timeslots were not in use when I took the screenshot.

So much for observing GSM cells. In further posts I'll have a closer look at how UMTS and LTE uplink as well as downlink transmissions can be observed and what they look like in SDRSharp.

Kudos to all people who worked on the various parts of SDRSharp and the rtlsdr library, this is really cool stuff!!!

O2 Germany and E-Plus To Merge (Yet Again)? – Part 2 – Impacts on Society

In the previous post on this topic I've looked at some financial and technical details of O2 Germany buying E-Plus and subsequently shutting down the E-Plus network. Paying well over 100 Euros per subscriber even in an optimistic takeover scenario means that over many years, O2 wouldn't earn anything from half of its doubled subscriber base. Therefore I'm looking forward to seeing some more discussion in the financial press on the viability of this deal. But what about the impact on society if choice is reduced from four independent network infrastructures to three?

When it comes to telecommunication networks there are a number of goals that a government should enforce in the interest of its citizens and the long term stability of companies:

Long-Term Business Prospects For A Company

Obviously, a state needs to take care that the framework in which companies operate and compete allows fair competition between incumbent companies and newcomers and offers as many of them as possible the opportunity to thrive. By reducing the number of network operators from four to three, the total revenue for each of the remaining companies is higher. If the infrastructure of a network operator is used by more subscribers, fixed costs for transmission lines, new equipment and perhaps also rental costs might be reduced on a per subscriber basis. This can improve the bottom line of the company and reduce end user prices. It's then up to competition to decide how much goes into which bucket. It should be noted, however, that having twice the number of users on a network does not cut the cost in half, as overall network capacity has to be higher. You might not need additional antennas, but base stations will have more hardware elements to handle twice the amount of data going through the node, and backhaul links need to have a higher capacity as well, which costs additional money. Nevertheless, more users on a network will result in decreased costs per user.

End User Prices

The first thing most people will think about when it comes to telecom services is price. The cheaper the better. As discussed above, going from four to three networks will reduce costs per user in the new network. It is then up to competitive forces whether the cost advantage will benefit users. But will competition between three independent network operators still be as beneficial to subscribers as it is today with four network operators? In countries such as France and Belgium, which had, and in Belgium's case still have, only three network operators, prices were far higher and competition far weaker than in other countries with more network operators. Only the launch of a fourth network operator in France finally brought the competition necessary to nudge network operators towards offering mobile Internet packages comparable to those available for many years in other countries. The picture completely changes when looking at countries that in the past had four or even more network operators. Take the UK and Austria as prime examples, which at some point had four or five network operators. Prices were low, and in the case of Austria, nationwide coverage and speeds were excellent. And those operators complaining about fierce competition still had EBITDA margins of 25% and above. In recent years the situation has changed in both countries, with the UK well on the way to a network infrastructure duopoly and three independent network operators left in Austria. In the case of Austria, however, the concessions that had to be made for the takeover of Orange by Hutchison were hopefully a good way to ensure continued competition between three networks in the future. Only time will tell.

Geographic Availability

Price is not everything, even though that might not be what users perceive when thinking about the topic. But as soon as they travel to the countryside and find themselves out of high speed Internet coverage they might reconsider. With four network operators there were two that decided to offer more coverage in the countryside than their competitors, which leaned towards the cost sensitive side. With only three network operators, there could be more money per network operator to spend, especially for the newly combined one due to the increased subscriber base. Also, the other two network operators might have more money to spend, as some subscribers are likely to jump ship during the network merger process. But would this additional breathing space actually be used to improve rural coverage? Again, looking at other countries with three network operators and comparing rural coverage with countries with four or more network operators might hold a clue. In Austria, for example, with four network operators, rural UMTS coverage has for a long time been excellent and continues to be so. One might also wonder if rural LTE coverage in Germany would be where it is today (see here and here) had it been left to market forces rather than the auction rules that required the companies to deploy LTE in the countryside first and factor this additional investment into their pricing structure. Personally, I doubt it.

Some argue that there are increasing network infrastructure costs due to the rising data traffic in mobile networks that strangle network operators. Unfortunately they don't reference their sources. When looking at national regulator reports such as the 2012 report of the German regulator, nothing of the sort can be seen (see the PDF linked from this press report, page 71). In the last 10 years, investment in telecoms equipment has been in the order of 6 billion Euros, without increases seen in recent years. So despite usage growth, investments have not increased at all, and I have seen no data so far that would suggest this will change in the future.

Network Quality And How Countries With Four Infrastructures Compare

Another aspect that needs to be considered is the per-user data rates that can be achieved in networks. Having coverage everywhere is nice but worth little if a network is overloaded because operators have deployed insufficient backhaul capacity, too few carriers on the air, or have spaced base stations too far apart. Countries with four established network infrastructures are doing well. Take the results measured by independent companies over many years as an indication (see e.g. here and here). In contrast, the data rates I personally achieve in traditional three network operator countries such as France are quite the opposite. In other words, they haven't used the reduced competition and higher prices to improve network quality and coverage. The money must have gone elsewhere.

Network Neutrality

And before I come to a close I'd finally like to spend a sentence or two on network neutrality. It is an ongoing discussion in many countries and hotly debated lately in some, and going from four to three independent network infrastructures is unlikely to help the market ensure, on its own, that networks remain service neutral.

Summary

There we go, a long post today, but there are obviously many things to consider. From what I can tell there is no precedent where reducing the number of network infrastructures has led to benefits for society. The comparisons above suggest quite the opposite. Based on the financial figures of the proposed deal I wonder if Telefonica/O2 will do itself a favor either. Also, I don't see any hard facts suggesting that the current four network infrastructure model will lead to a failure of one of the mobile network operators. It is going to be interesting to observe how the situation develops over the next months. I expect that national and international regulators will have a very close look at the proposed deal, and if the deal is not rejected I think there will at least be significant conditions and concessions required from O2 to minimize the impact of the deal on the topics discussed above. Let's see how that will change the financial model.

O2 Germany and E-Plus To Merge (Yet Again)? Some Thoughts on Benefits and Tech Background

About twice a year there are rumors in the German mobile industry that O2 and E-Plus are about to merge one way or another. This time, it's more than a rumor, as O2 has actually made an official offer of around 5 billion Euros in cash plus 17% of O2 stock to Dutch telecom incumbent KPN, which owns E-Plus. The total sale price is thus around 8.1 billion Euros according to the WSJ. I wonder how this makes sense from a financial point of view when considering the significant sum of money involved and the drastic network changes likely to be required to form a single network!?

On the financial side, KPN has always touted that E-Plus is its cash cow, generating an annual EBITDA of 1.353 billion Euros out of a total revenue of 3.236 billion Euros in 2011 according to their Wikipedia entry. O2 Germany has a similar EBITDA from a total revenue of 5.21 billion Euros in 2011 (see here and here). So both companies are profitable and, with a market share of around 20% each that is still growing, are far from being the lame ducks of the German mobile network industry as some market commentators suggest. But financial numbers can be interpreted in many ways and I am not a monetary expert, so I won't dig deeper into this part of the story.

Let's have a look at some technology related implications of such a deal. "Die Zeit" reports that O2 Germany's CEO estimates potential cost savings of 5.5 billion Euros, but no timeline was given for realizing those savings. At the beginning, I think it is likely that a massive amount of money will have to be spent on forming a single network. Obviously it makes no sense for a single company to run two overlapping networks. As the majority of both networks overlap today, one has to be switched off and base station installations have to be removed. Quite a bit of work and cost is involved in removing 19,000 base station sites. I wonder if there's a market for second hand network equipment where some money can be made, or if the equipment, which is unlikely to be the latest kit, will just have to go in the bin. In the long run this will of course reduce base station site rental costs. As many base station sites are shared and owned by another company, I wonder if the rental prices for other companies at those sites will go up as a result of the reduced number of companies that rent tower space?

Obviously, switching off one half of the network requires increasing capacity on the other network, as otherwise it would go into overload with the additional traffic. O2's network already seems to be stretched in many areas, so increasing capacity will incur significant cost that would otherwise not be necessary. Binning half the network and increasing capacity on the other half, I wonder what the cost of this would be!?

From a timing perspective the deal comes at an interesting point. The current GSM licenses are due to expire in 2016 and are set to be re-auctioned. An interesting time to cease operation, as E-Plus would have made very good use of its expenditure for the initial spectrum. It's also likely that there will be little delay in the forecast spectrum auction in the 2016 timeframe, which will also include new spectrum in the 700 MHz band, the so called Digital Dividend 2 spectrum. Only three players, and I assume there won't be more, as building up another network when one has just ceased operations is unlikely to happen, would reduce competition and thus spending on the network operators' part. Here are definitely savings I can see for O2 on the horizon.

On the downside, if the deal went ahead, O2 would not be able to keep E-Plus spectrum whose licenses run beyond 2016, which includes the UMTS spectrum and the spectrum newly acquired in the 2010 auction. E-Plus currently holds five chunks of 5 MHz in the UMTS 2.1 GHz band, two from the initial auction and three from the 2010 auction. This is more than any other operator holds in this band, and it would go back into the pool for the next frequency auctions scheduled for 2016, together with the spectrum acquired in the 2.6 GHz range for LTE. A massive loss of investment!

Let's summarize: O2 would pay the equivalent of 8.1 billion Euros for E-Plus, get no network, would have to shed all frequency licenses and would have to invest massively into its own network to absorb E-Plus customers. It's also likely that the other two network operators would jump at the opportunity and try to get some of the E-Plus and O2 customers onto their networks who might not be happy with how the network performs during the switchover. In other words, the only thing O2 would get out of such a deal is E-Plus' subscribers and nothing else, no assets whatsoever. O2's CEO estimates that cost savings could be 5.5 billion, but coming from a CEO who wants to push a deal, it has to be assumed this is an optimistic number. Spending 8.1 billion and perhaps getting 5.5 billion back over a longer timeframe!? Does that make financial sense? One has to wonder… Taking those numbers, O2 would effectively spend 2.6 billion for perhaps 24 million customers. That's around 108 Euros per customer (not including interest, overly optimistic cost saving figures, etc.). That, on the other hand, does not sound very expensive.

So much for now. Another thing that has to be considered, though, is the impact of such a deal on competition, i.e. what would change for consumers. That's for another post, however.

Massive CSFB Speed Improvement in LTE Live Networks

LTE has been great so far because of its speed and because it brings high speed wireless Internet to the German countryside. One major downside of LTE on my smartphone so far, however, has been the very long call establishment times for incoming and outgoing voice calls due to the required fallback to GSM or UMTS.

In practice I observed typical CSFB (circuit switched fallback) delays of about 2.5 seconds in live networks, in addition to the normal 3G call setup time. As the fallback happens on both sides, a call from one LTE smartphone to another thus takes 5 seconds longer to establish than a 3G mobile-to-mobile call, which takes around 5 seconds. 5 vs. 10 seconds, it almost felt like an eternity.

Recently, however, to my surprise, setup times have significantly decreased in my network of choice, and CSFB calls between two LTE mobiles are now established almost as fast as pure 3G-to-3G calls. The difference is around half a second at most. Something one can quite live with.

Kudos to the network engineers, LTE is now finally usable for me on smartphones!

Real World Interaction: A Raspberry Pi as a Water Alarm System With Internet Connectivity

A couple of weeks ago I wrote about my re-discovery of the fascination and usefulness of the electronics kits I experimented with in my youth and how I wanted to make good use of them again in combination with a Raspberry Pi. The project I had in mind, and which has now borne fruit, is a water alarm system with Internet connectivity.

I'm a practical guy, so playing around with new hardware and software always has to have an application for me. When you enter the kitchen in the morning and are welcomed by a pool of water on the floor you instantly know something is wrong. In my case it was a leak in the rooftop that subsequently proved to be a bit difficult to find, so we went through a trial and error phase. During that phase I wanted to know immediately when water started accumulating on the kitchen floor again so I could take the appropriate countermeasures.

A perfect application for the Raspberry Pi, which could warn me of a new water pool building on the floor via email. The Pi itself has some I/O pins which are, however, not well protected, so I decided to buy one of the hardware extension boards that offer buffered and protected I/O ports. There are a number of different boards available and my choice fell on the Pi-Face, as it's the same size as the Raspberry Pi and hence I could fit it into a small casing. As the Pi-Face is only a generic I/O board I needed additional hardware to detect water on the floor. This is where my electronics kit came in for prototyping a detector, as seen in the first picture on the left.

Once this was working I decided to go for the real thing and build five of those sensors on a real board so I could place five detectors at different locations on the floor. The second picture on the right shows what the final solution looks like: The casing contains the Raspberry Pi with a tiny Wi-Fi adapter below the PiFace, which is connected to the self made electronic board via a number of cables. From there, 2x five 3m sensor cables leave the casing on the right to different locations on the floor. On the other end I just taped the uninsulated cable ends to the floor. Water between the ends of two cables changes the resistance between them, which is detected by the corresponding detector on my self soldered board.

When one of the detectors on the board recognizes a change in resistance at the end of its cable it drives an input port on the PiFace, which in turn is detected by a Python program running on the Pi. The Python program then immediately sends me an email to notify me of the event. The program also sends me regular status updates of all input ports and notifies me in case an input is switched off again, i.e. the water has disappeared.
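The detection part of the program is not shown in this post, but the principle is simple: poll the PiFace input ports and react when one of them changes. Here's a rough sketch of what such a loop could look like, assuming the pifacedigitalio Python library and a sensor on input pin 0; the pin numbers, polling interval and the send_status_email helper (which stands for the email-spawning code shown further below) are illustrative assumptions rather than my actual program:

import time
import pifacedigitalio

piface = pifacedigitalio.PiFaceDigital()
last_state = 0

while True:
    # input pin 0 goes high while the detector senses water (assumption)
    state = piface.input_pins[0].value
    if state != last_state:
        if state:
            piface.leds[0].turn_on()    # visual indication on the PiFace
            send_status_email("water detected on sensor 0")
        else:
            piface.leds[0].turn_off()
            send_status_email("sensor 0 is dry again")
        last_state = state
    time.sleep(1)   # poll once per second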

Sending an email with Python is, by the way, pretty straightforward, as there are libraries that can even handle encryption via secure SMTP. I've attached the source for this part of the program at the end of this post, as it could come in quite handy for other projects you might want to try.

Quite frankly I wouldn't have gone through the whole thing if I just had a water leak. But a RasPi project, real world interaction, connectivity to the Internet, a little electronics project and a real world problem to solve was too hard a thing to resist.

And here's the source for sending email in Python:

As I wanted the email transmission to be independent from the rest of the alarm system, I decided to spawn the Python email code in an independent process. If something fails here, the system will still work and continue to monitor the alarm sensors and show the result on the LEDs of the PiFace. Also, should one email task get stuck for one reason or another, I would still be informed of the problem with the next periodic status email. Here's the code to spawn a new independent task without waiting for it to finish:

import subprocess
import syslog

EMAIL_SCRIPT_WITH_PATH = "/home/pi/send-email.py"
EMAIL_FROM = "test-name@domain.com"
EMAIL_TO = "my-name@another-domain.com"
EMAIL_SERVER = "smtp.my-domain.com"
EMAIL_PASSWORD = "very-secret-of-course"
EMAIL_PORT = "587"

def send_status_email(PrintString):
    # spawn send-email.py as a separate process so a stuck email
    # transmission cannot block the monitoring loop
    syslog.syslog('sending status email')
    subprocess.Popen([EMAIL_SCRIPT_WITH_PATH, EMAIL_FROM,
                      EMAIL_TO,
                      "System status: " + PrintString,                # subject
                      "Status of monitoring system " + PrintString,   # body
                      EMAIL_SERVER,
                      EMAIL_FROM,       # SMTP username
                      EMAIL_PASSWORD,
                      EMAIL_PORT]).pid

And on the other end, send-email.py looks like this:

#!/usr/bin/env python

# send a text email from the command line using python
#
# version 1.0
#
# Original code found at http://www.cs.cmu.edu/~benhdj/Mac/unix.html#smtpScript
#
# NOTE: if smtp username is "" then code will not use the smtp authentication method
#
# input parameters
#    sys.argv[1] is the sender email address
#    sys.argv[2] is the receiver email address,
#        this can be a comma separated string for multiple receivers
#    sys.argv[3] is the subject text
#    sys.argv[4] is the body text
#    sys.argv[5] is the smtp host
#    sys.argv[6] is the smtp username
#    sys.argv[7] is the smtp password
#    sys.argv[8] is the smtp port
#

import smtplib, email, sys, time
import syslog
from email.mime.text import MIMEText

# check to make sure the number of arguments is correct
if len(sys.argv) != 9:
  print 'Usage: send-email.py <sender> <receiver> <subject> <bodyText> <smtpHost> <username> <password> <port>'
  sys.exit(1)

# get the argv variables
sender = sys.argv[1]
receiver = sys.argv[2]
subj = sys.argv[3]
bodyText = sys.argv[4]
smtpHost = sys.argv[5]
username = sys.argv[6] # use "" if no SMTP authentication is required
passwd = sys.argv[7] # ignored if no SMTP authentication is required
port = sys.argv[8] # ignored if no SMTP authentication is required
 
# create a list from the receiver string in case it is a comma separated list of multiple receivers
rList = receiver.split(',')

# setup the message header
msg = MIMEText(bodyText)
msg['Subject'] = subj
msg['From'] = sender
msg['To'] = receiver

# determine if a password protected smtp host is being used and connect as necessary
if username == "":
    server = smtplib.SMTP(smtpHost) # smtp server is not password protected
else:
    server = smtplib.SMTP(smtpHost, port)
    server.starttls() # the submission port (587) normally requires STARTTLS before login
    server.login(username, passwd)

failed = 0
failed = server.sendmail(sender, rList, msg.as_string())
server.quit()

# return the status
if failed:
  print 'send-email.py: Failed:', failed
  syslog.syslog('send-email.py: Failed: ' + str(failed))
else:
  print 'send-email.py: Finished with no errors.'
  syslog.syslog('send-email.py: OK: ' + str(failed))

Have fun hacking!

Raising the Shields – Part 4: Encrypting E-Mails and How Search and My Smartphone Stand In the Way

On my way to putting some more privacy through encryption and self hosting between me and the rest of the world the next step was looking at email as that is certainly one of the main means of communication for me.

As I already use Thunderbird as my email client instead of a web mailer interface, getting PGP (Pretty Good Privacy) encryption to work is quite easy. The only thing that is required on my Linux notebook is the installation of the Enigmail plugin in Thunderbird, which is straight forward. On a Windows box, GPG (Gnu Privacy Guard) has to be installed in addition.

Once installed, the next step is to create a public/private encryption key pair, of which the public key is then distributed to friends and colleagues so they can use it to encrypt email they want to send to me. The other end needs to do the same, and once you have imported someone's public key into Enigmail's key repository, encryption works both ways. In addition, each end can digitally sign their emails so it can be verified that an email is not forged.
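This post describes the workflow through Enigmail's graphical interface, but the same public key steps can also be scripted, which may make the process clearer. Here's a rough sketch using the python-gnupg package (not something the setup above relies on; the keyring directory, names and passphrase are made-up examples) that walks through generate, export, import, encrypt and decrypt:

import gnupg

# use a throwaway keyring directory for this experiment
gpg = gnupg.GPG(gnupghome='/tmp/pgp-test')

# the recipient generates a key pair and exports the public part
key = gpg.gen_key(gpg.gen_key_input(name_email='bob@example.com',
                                    passphrase='bobs-secret'))
public_key = gpg.export_keys(key.fingerprint)

# the sender imports the recipient's public key and encrypts a message with it
gpg.import_keys(public_key)
encrypted = gpg.encrypt('Meet me at 10', key.fingerprint)
print(str(encrypted))   # ASCII armored ciphertext, safe to send by email

# only the holder of the matching private key and passphrase can read it
decrypted = gpg.decrypt(str(encrypted), passphrase='bobs-secret')
print(str(decrypted))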

So much for the elevator pitch version, for detailed step by step instructions on how to get this working, have a look here.

Simplicity is Key

As I want to use email encryption to communicate with non-technical people, one thing that is very important to me is that the Enigmail plugin can be configured to automatically encrypt emails to addresses for which a public key has been imported. While not straightforward, this can be done by creating per-address encryption rules in Enigmail. One can also configure Enigmail not to ask for a password to access the key store, which makes encrypting and decrypting emails completely transparent to the user. Not quite ideal from a security point of view, but probably the only option from a non-technical user's usability point of view…

There is one big catch, however: Emails remain encrypted on the PC, and searching their body text later on in Thunderbird is not possible as the decryption module is not hooked into search. I don't search my emails a lot, but I need that function from time to time to find an important email I sent or received ages ago. A pretty high price to pay for encryption if I can't search my email anymore. The obvious solution would be to hook decryption into the code that searches my email database. Another option would be, since my hard drive is encrypted anyway, to remove the encryption from received and sent emails and only keep the sender's signature. This way, search would work again and the emails would remain readable.

PGP on Mobile

I also need encryption and decryption of my emails to work on my Android smartphone. Again it turned out that I had the necessary pieces already in place, since I already use K-9 Mail instead of Google's native Android email program. While K-9 doesn't support PGP encryption out of the box, there's an OpenPGP app called APG in Google's app store. K-9 needs to be reinstalled after APG is up and running, but this is quite painless by exporting and importing K-9's configuration to a file.


Unfortunately, and that's another big catch for me, APG only supports simple emails. Emails that come in multipart MIME format, e.g. because there's a file attachment or because the originator's client composed them that way, are not yet supported. When looking at the APG website and mailing list, it looks like there has been no real development since 2010. In other words, the project seems to have stalled.

Things That Are Never Encrypted

Even with encryption, the sender and receiver of an email are always transmitted in plaintext, so the metadata of whom I communicate with can still be recorded. The subject line of encrypted emails is also sent in the clear, something one should be aware of as well.

Summary

With the inability to search through stored encrypted emails and K-9's very limited PGP support, secure emailing remains quite impractical for me for the moment. A typical convenience-trumps-security decision. But these shortcomings are not inherent to the basic encryption process and could be fixed in future versions of Enigmail, Thunderbird and K-9.

The Presence Dilemma

Perhaps I'm old school but I have a presence dilemma. I'm referring of course to the presence status of many instant messaging applications that show all my contacts whether I'm currently online, offline or in a state in between.

For me the dilemma is that there is a difference between being online and having the time or being in the mood to engage in a conversation. When I receive an instant message out of the blue and don't have time to respond, I sometimes don't feel comfortable rejecting the conversation, as that might be seen by the other party as rude, especially if he or she is also 'old school'. And then I have to remember to 'text back' once I have time. Also not ideal.

I could of course set my client to 'invisible' but then I would forget later on to switch it back to 'available' when I am or feel reachable again. And no, I don't want to go fully offline with my instant messaging client as sometimes I still want to be reachable for a select audience.

Yes, human interaction is complicated (or perhaps it's just me?) and instant messaging presence is far from reflecting my reachability status.