UMTS Security Undermined By SS7 – The Bigger Picture

Until December 2014 I thought that the UMTS air interface was secure. After all, the air interface is much more complex than the GSM air interface and strong authentication and encryption are used. It felt good. And then, a few days before 31C3, news broke that security researchers would demonstrate a way to passively intercept SMS messages sent over the UMTS air interface with cheap equipment if the attacker has access to the signaling network used by wireless networks, known as SS7 (Signaling System No. 7), anywhere in the world. And as it turned out only a few days later, this scenario is only the tip of the iceberg.

Attacking UMTS Ciphering From The Inside

I never thought about such a scenario before, but with the clues given in the article it only took me 5 minutes to figure out the details by having a closer look at the MAP (Mobile Application Part) specification in 3GPP TS 29.002. When a subscriber moves from one MSC area to another, the MSCs need to exchange subscriber information, and chapter 8.1.4 details the Send Identification service which transfers, among other things, the current ciphering keys from one MSC to another. These ciphering keys can then be used to decrypt transmissions on the UMTS air interface to and from a particular subscriber. The presentation at 31C3 by Karsten Nohl of Security Research Labs a few days later proved that my assumptions were correct. The slides can be found here and a video of the talk has been posted here.

From a psychological point of view I found this quick discovery quite interesting. The message is necessary for proper mobility management in a network and I've known about it from my days as a core network programmer, but it never crossed my mind that it could be exploited if such messages are routed across network and country borders. When looking at it from a different angle, however, it becomes immediately obvious. And it seems I was not the only one who did not see it, because it was reported that none of the four German cellular networks had filters in place at their network boundaries to prevent such queries. Fortunately, all four reacted quickly and put message filters in place to stop the abuse.

Re-routing Attacks

Unfortunately, the filter only stops this particular exploit. A further talk at 31C3 by Tobias Engel and a presentation by 'Positive Technologies' given earlier in 2014 at a conference in Moscow reveal several other ways to exploit the implicit trust between cellular networks that enables roaming across country borders. With access to the global SS7 network anywhere in the world, an attacker has several ways to re-route a call to a subscriber somewhere else to record it and then forward it again to its destination. This can be done by sending fake USSD (Unstructured Supplementary Service Data) messages to the HLR (Home Location Register) to activate and deactivate immediate call forwarding. This way an incoming call can automatically be forwarded to a recording station. Once it arrives there, the call forwarding is removed and another call is made from the recording station back to the subscriber. Another way described in the presentations linked above is to use the SS7-based CAMEL protocol to change the destination of a call during the establishment process without having to change the call forwarding settings in the HLR.

While call re-routing is probably most interesting for spy agencies for political and industrial espionage, researchers have also shown how it is possible to redirect SMS messages by sending fake subscriber registration messages across international SS7 links. This way a mobile device is deregistered at its current location and appears to have traveled across international borders. Any incoming calls and SMS messages are thus re-routed to the attacker, who can sit anywhere in the world. The subscriber doesn't notice the deregistration as his mobile device continues to show that it is connected to the network. This won't work for long, as sooner or later the device will perform a periodic location update at its current location or try to access the Internet, at which point the fake registration is deleted. When timed correctly, however, the temporary redirection can be used for fraud. In combination with a banking Trojan that collects banking website login PINs and knowledge of the user's phone number, a confirmation SMS for a transaction triggered by the fraudsters can be redirected into their lap without the user even noticing it. A scary scenario.

Ways To Stop It

The only good news is that these attacks are not passive, as they leave traces in the logs of network operators. But that's about it. In practice it is probably difficult, but not impossible, to get access to the international SS7 network. For intelligence 'services' around the world it should be no problem whatsoever. So what can be done?

  • The first step has already been taken by some network operators by blocking requests for the current ciphering keys from outside their networks.
  • Some of the re-routing attacks, e.g. changing call forwarding settings from abroad via USSD, can be prevented by plausibility checks, i.e. the HLR or a box in front of it has to verify that the USSD message comes from the Mobile Switching Center to which the subscriber is currently attached. To prevent spoofing of the sender's SS7 point code, a network operator's international SS7 gateway has to ensure that only messages with international point codes are allowed into the local network.
  • Check CAMEL modification messages: The service logic in MSCs must ensure that only Service Control Points (SCPs) from a predefined list of Global Titles can be informed about call establishments and other operations.
  • Encryption of national SS7 links: To prevent foreign intelligence services to tap SS7 links in other countries, all SS7 traffic between locations must be encrypted and integrity checked.
  • Monitor changes to call forwarding settings: Most people don't change their call forwarding settings regularly. I'm probably an exception. A box in the network could watch out for frequent and thus suspicious call forwarding changes and warn the operator and the subscriber.
  • Plausibility-check international requests for authentication material: Even after barring the exchange of the current ciphering key, networks can still request authentication and ciphering material for subscribers of other networks. This is the basis for international roaming but may also allow those with access to UMTS IMSI catchers to get valid keys. The only way to counter this is to check whether an authentication vector request is likely to be valid. If a request comes in from abroad while the mobile just recently made a location update in the home country, it's unlikely that the request is valid. Exceptions are border areas to neighboring countries. That makes plausibility checks not impossible but quite complicated in practice. A very rough sketch of such a check can be found after this list.
  • Check international registration requests: The same checks as described in the bullet point above have to be applied to registration requests to prevent the re-routing attacks described earlier. As above, preventing such fraud is not impossible but not straightforward to implement in practice.
  • Allow subscribers to toggle a "Home Network" lock: If worse comes to worst, this would stop any kind of foreign attack if such a lock blocked all requests for ciphering material, registrations, etc. from international SS7 links. I'm sure a lot of politicians and high-value espionage targets would sleep easier. I'm not sure if this is the same as just deactivating international roaming, as can already be done today… And by the way, such an approach is not novel. Some credit card companies, for example, restrict the use of their cards to countries in which EMV chip/PIN authentication is used and require their customers to temporarily unlock their cards if they travel to parts of the world where the magnetic stripe is still used.
  • Name, shame and ban: Networks from which illegal SS7 messages are sent should be made public so other network operators can react and also put countermeasures in place. If I were a network operator I would also think about terminating my business with such a network and blocking all traffic from there. A few examples made public would probably work wonders to convince network operators to keep their backyards clean.
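To illustrate the plausibility check idea mentioned in the list, here is a very rough sketch. It assumes a hypothetical CSV export of incoming international MAP requests and of the most recent location update per subscriber; the file formats and field names are made up for illustration, and a real network would of course implement such logic in the STP, in front of the HLR or in a dedicated signaling firewall rather than in a shell script:

#!/bin/bash
# Sketch only: flag authentication/registration requests that arrive from a
# country other than the one where the subscriber was last seen shortly before.
# Assumed (invented) input formats:
#   requests.csv:  epoch_seconds,imsi,origin_country,operation
#   last_seen.csv: imsi,country,epoch_seconds   (most recent location update)

MAX_AGE=3600   # only consider location updates younger than one hour

awk -F, -v max_age="$MAX_AGE" '
  NR == FNR { last_country[$1] = $2; last_seen[$1] = $3; next }
  {
    imsi = $2; origin = $3
    age = $1 - last_seen[imsi]
    if (imsi in last_country && origin != last_country[imsi] && age < max_age)
      printf "SUSPICIOUS: %s for IMSI %s from %s, last seen in %s %d s ago\n",
             $4, imsi, origin, last_country[imsi], age
  }
' last_seen.csv requests.csv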

This list is by no means complete and is just the result of some initial thinking. I'm sure there is a lot more that can and should be done. Perhaps some of these things are already done by some network operators today, but I have no insight into this so I can't say.

So far, nobody has spoken about how to compromise LTE security over international links. This is probably because international LTE roaming is not based on SS7 but on the IP-based Diameter protocol. The issues are similar, however, because the principle is the same: cellular networks have to trust each other for international roaming to work.

And finally, it's important to understand that none of the SS7 issues discovered by researchers and described above require breaking any kind of ciphering, exploiting implementation flaws, triggering stack overflows to insert malicious code, or using social engineering to trick someone into doing something. Instead, they just make use of the protocol in ways it was never intended to be used. In other words, the only way to fix this is to move away from totally trusting external networks and to put checks in place that detect and prevent such attacks. Now that things are out in the open, I guess the industry has its work cut out for it.

New Spectrum in Germany’s 2015 Spectrum Auction

2015 is going to be another interesting year in wireless in Germany as another spectrum auction will perhaps again significantly influence the wireless landscape in the center of Europe.

There are lots of different angles from which to look at this spectrum auction, including competitive aspects, how much of the spectrum that companies already use today they want to re-acquire, whether it will be a real auction or just an amicable get-together now that only three network operators are left, what will be done with the auction proceeds, i.e. whether they are reinvested in telecoms, and if so how, or whether the money is funneled into other channels as in the past, etc.

But no, I don't want to look at any of these aspects. Instead, let's have a look at the 2015 spectrum auction from a technology point of view: In addition to re-auctioning spectrum assignments in currently used bands such as the 900 and 1800 MHz bands that expire in 2016, two additional bands will be part of the auction, as reported by Teltarif and Heise:

Digital Dividend 2 in 700 MHz

The first new band comprises two 30 MHz blocks (one for the uplink and one for the downlink) in the 700 MHz region currently used for TV broadcasting. This part of the spectrum has to be freed up first as part of the "Digital Dividend II" program, which foresees terrestrial TV broadcasting moving from DVB-T to the more efficient DVB-T2 standard. The two 30 MHz blocks are between 703 and 733 MHz and from 758 to 788 MHz. That's a subset of the already standardized LTE band 28 which, according to Wikipedia, is a result of the Asia-Pacific Telecommunity (APT) band plan. In other words, it has the same size as the Digital Dividend band (LTE band 20) already used today in the 800 MHz range.

A First in 1400 MHz

The second new band is foreseen for downlink-only use between 1452 and 1492 MHz, i.e. 40 MHz. That's a subset of LTE band 32 (1452 to 1496 MHz). As it is uni-directional, it must be used as part of an LTE Carrier Aggregation setup together with a bi-directional band.

And that's it as far as new spectrum is concerned. That a chunk of uni-directional 40 MHz spectrum is being made available shows how little spectrum is still left below 3 GHz. Anything above is unlikely to be of much use in a macro cell network setup (i.e. a cell site covering a radius of several hundred meters to a few kilometers).

But every MHz counts and it's going to be interesting to see how things develop in the months to come.

RSync for Backing Up My Owncloud

I like to have a plan B, so I regularly back up my Owncloud document folder to an external storage device and, in addition, to another Owncloud installation running on a Raspberry Pi so I can activate this instance should my main Owncloud installation ever fail while I'm not at home. So far, I've always copied over the complete document folder to the Raspi, which takes quite a while as it contains several gigabytes of data. Recently, however, I decided to have a closer look at the rsync command and noticed that it would be ideal to speed up the process as it can compare source and destination and only copy the parts of files that have been modified. Here's the command I put together after reading a couple of how-tos that exactly fits my needs:

rsync -avzh --rsync-path="sudo rsync" /media/owncloud-drive/data/ pi@192.168.42.3:/media/owncloud-drive/data/ --progress --delete

Looks a bit complicated but it's pretty straightforward:

  • -avzh are the default options: 'a' = archive mode, which goes through the directory recursively and preserves permissions, time stamps and ownership; 'v' stands for verbose output, 'z' for compressing data before transmission, and 'h' for human-readable output.
  • --rsync-path is used to run the rsync instance on the remote Raspberry Pi with admin rights, which are required to copy the Owncloud folder as it belongs to the 'www-data' account used by the web server.
  • /media/owncloud-drive/data/ is the path to the local Owncloud data folder that is to be copied to the destination.
  • pi@192.168.42.3:… is the account, IP address and path of the remote device to which the data shall be copied.
  • --progress, as you might imagine, gives more details while the command is running.
  • --delete allows rsync to delete all files at the destination which no longer exist on the source.

One shouldn't be adventurous when it comes to backups, but since this is still in the test phase I ran the rsync command with Apache shut down on the target but not on the source server. So in theory, Owncloud could write to the log or the SQLite database file just at the moment the modified part of the database file is copied over and thus corrupt the destination database file. I've run the command many times over several days now and so far I've had no issues from not shutting down Owncloud on the source server during the process. Maybe I was just lucky so far or maybe it's no problem at all, I'm not sure yet. But I'll keep you posted. A more conservative variant that takes Owncloud offline on both sides for the duration of the copy is sketched below.
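For the sake of completeness, here is what such a more conservative run could look like. It's just a sketch based on my setup: the paths, the Pi's IP address and passwordless sudo for the 'pi' user on the remote side are assumptions, and stopping Apache is simply the bluntest way to make sure Owncloud can't write while rsync runs:

#!/bin/bash
# Conservative backup run (sketch): stop the web server on both sides so
# Owncloud cannot write to the SQLite database or log files during the copy.
# Assumes passwordless sudo for the 'pi' user on the remote Raspberry Pi.

REMOTE="pi@192.168.42.3"
SRC="/media/owncloud-drive/data/"
DST="/media/owncloud-drive/data/"

sudo service apache2 stop                    # take the local Owncloud offline
ssh "$REMOTE" "sudo service apache2 stop"    # take the remote Owncloud offline

rsync -avzh --rsync-path="sudo rsync" --progress --delete "$SRC" "$REMOTE:$DST"

ssh "$REMOTE" "sudo service apache2 start"   # bring both instances back
sudo service apache2 start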

My Personal Technology Highlights in 2014

Another year is drawing to a close and, as in the years before, I wondered what had happened during the year. And as always, I was quite surprised by the amount when I went through my blog entries of the past 12 months. So here are my personal technology highlights of 2014:

LTE and Affordable Worldwide Internet access

Last year was the year when roaming in Europe finally became affordable. But that was nothing compared to what has happened in 2014. In July, I switched to a mobile contract that removed roaming charges for voice, data and SMS in Europe for 5 Euros extra a month. In addition, many network operators have now started to roll out LTE roaming and I had my first European and intercontinental LTE roaming experiences. And on top of that, my network operator of choice decided to apply the former EU data roaming rates to the rest of the world, thus enabling truly affordable global Internet access roaming. I've used it in China and the US during the year and it worked perfectly. On the technology side, I've also mused about data roaming costs from a technical point of view.

3rd Edition of my Book on Mobile Networks Gets Published

About 10 years ago the first edition of my book on mobile networks was published. Needless to say, over the years many things have changed and new technologies have appeared on the scene. I thus kept updating the manuscript, and 2014 saw the publication of the 3rd edition of 'From GSM to LTE-Advanced – An Introduction to Mobile Networks and Mobile Broadband'.

Network Function Virtualization

In the making for a number of years now, the standardization and discussion around Network Function Virtualization is taking shape. Having used virtualization on the desktop for quite some time to do things like locking up Windows in a virtual machine, I decided it was time to write an NFV primer. You can find the result here.

CyanogenMod, Root Access and 'Smartphones are PCs now'

Last year marked the end of Symbian for me and I've been struggling since then to get my privacy back, i.e. to make Android stop talking to Google and others all the time. A first step towards this goal was to switch to CyanogenMod, which brought some disadvantages but also opened up a whole new world for me. With CyanogenMod and root access, smartphones really started to feel like computers to me now, and I wrote a long blog entry about the next revolution in computing based on those experiences. From a practical point of view, I figured out how to stop my smartphone and other devices from contacting Google and advertisers all the time to regain my privacy and to bring pleasure back to web surfing on mobile. In September I automated the blocking list update process and put the details on Github so others could benefit as well.
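In case you are curious about the mechanics, the core of the blocking approach is just a hosts file that points ad and tracking domains to an unroutable address. The following is only a generic sketch of the update step on a Linux box, with a placeholder URL and marker line, not the exact script from my Github project:

#!/bin/bash
# Sketch only: refresh a hosts-based blocking list. The URL is a placeholder.
BLOCKLIST_URL="https://example.com/hosts-blocklist.txt"
MARKER="# ---- blocklist below this line ----"

curl -s "$BLOCKLIST_URL" -o /tmp/blocklist.txt

# Remove the old list (everything below the marker), then append the new one.
sudo sed -i "/^$MARKER\$/,\$d" /etc/hosts
echo "$MARKER" | sudo tee -a /etc/hosts > /dev/null
sudo tee -a /etc/hosts < /tmp/blocklist.txt > /dev/null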

Security and Privacy

Like last year, security and privacy have remained important topics for me, as Edward Snowden's revelations on the scope and depth of mass surveillance continue to baffle me. 'Raising The Shields' has been my motto since, and I've put a number of things together to encrypt as much of my communication as possible. With a Raspberry Pi I've built a security gateway for VNC remote screen sessions that encrypts both legs of the connection by using SSH tunnels. Another Raspberry Pi, and later a Banana Pi for performance reasons, have since been put into use as OpenVPN servers. And to encrypt all my Internet traffic when I'm in public places such as hotels, I've put together scripts and configuration files to configure a Raspberry Pi as an OpenVPN client and Wi-Fi access point. The scripts and configuration files are on Github for those of you with similar needs.
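The VNC gateway part is conceptually very simple: neither end exposes a VNC port directly, both just talk to the Raspberry Pi in the middle through SSH. A minimal sketch of the idea, with placeholder host names, users and ports (my actual setup differs in the details):

# On the machine whose screen is shared (VNC server listening on port 5900):
# open a reverse tunnel so the gateway can reach the VNC server.
ssh -N -R 15900:localhost:5900 pi@gateway.example.org

# On the viewing machine: forward a local port through the gateway to that
# tunnel endpoint, then point the VNC client at it.
ssh -N -L 5901:localhost:15900 pi@gateway.example.org
vncviewer localhost:5901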

2014 has also been the year of massive security issues. Heartbleed is the one that will probably be remembered best, and I had posts about whether my Raspberry Pi servers were vulnerable and about the extent of just how bad this discovery was, and wondered why nobody discussed the NSA's denial that it knew about the flaw and what this would mean if it were actually true.

Over the summer break, I decided to have a closer look at how assisted GPS works and found out that SUPL, one of the protocols used by some mobile chipsets, reveals my identity and location to Google every time I fire up the GPS chip. For those of you who care about the details, I had blog posts with further technical details here and on how to trace a SUPL request here. But even if assisted GPS is switched off, it's still not easy to hide your location, even if a VPN is used, as described here.
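If you want to see this for yourself, capturing the traffic is straightforward. A small sketch, assuming the common setup in which SUPL runs over TCP port 7275 towards a server such as supl.google.com (check your device's gps.conf for the server it actually uses):

# Run on a router or tethering gateway that the phone's traffic passes through.
sudo tcpdump -i any -n -w supl-trace.pcap 'tcp port 7275'
# Then open supl-trace.pcap in Wireshark, which can decode the ULP messages.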

I was probably not the only one who was shocked to hear that whoever was behind Truecrypt decided to abandon the project as I've been using the software on many devices. Some projects followed to review the source code and to see if someone else could continue to maintain it. I'm not sure how that turned out because I decided to switch to dm-crypt (details here and here) which is truly open source and, from what I can tell, peer reviewed.
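For those who haven't used it before, the basic dm-crypt/LUKS workflow that replaced Truecrypt for me looks roughly like this. A sketch with /dev/sdX1 as a placeholder for the partition to encrypt (and note that luksFormat wipes it):

sudo cryptsetup luksFormat /dev/sdX1           # create the encrypted container
sudo cryptsetup luksOpen /dev/sdX1 cryptdata   # map it to /dev/mapper/cryptdata
sudo mkfs.ext4 /dev/mapper/cryptdata           # put a file system inside
sudo mount /dev/mapper/cryptdata /mnt          # use it like any other disk
# ...and when done:
sudo umount /mnt
sudo cryptsetup luksClose cryptdata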

Owncloud and How to Enable Everyone

Last year I started to use Owncloud to host my own 'cloud services' at home, and this has given rise to a number of interesting thoughts and projects. In 2014, I migrated my Owncloud installation to a NUC for higher performance. One problem with Owncloud is that it requires quite a bit of technical knowledge to get going. In other words, it's something for the nerds, as setting up dynamic DNS, configuring port forwarding, getting an SSL certificate and struggling with Internet lines at home without public IP addresses is not everybody's cup of tea. So over the course of the year I put the pieces of the puzzle together and came up with an idea of how to 'home cloud enable' everyone to keep private data private.

Open Source – The Joy of Fixing It Yourself

2014 was also the year I got rid of Windows at home. All computing devices in the household are now running a Linux distribution, and Windows is banished to virtual machines and to an alternate OS installation on a single machine for those very few occasions in which Windows running on bare metal is required. Over the year I have booted into Windows at home perhaps twice.

Open source is great because you can fix things yourself. To that end, I've reported a number of Owncloud issues on their Github presence, I supplied code to extend the Selfoss RSS server/reader platform with new functionality I wanted to have, and I set up two projects of my own on Github (the VPN Wi-Fi access point and the stuff required for privacy on CyanogenMod described above).

And finally, on the programming side, open source has helped me a lot to better understand the fabric of the web. As part of this, I worked through a book about PHP and MySQL, as sometimes books still trump online research, and implemented a private database application with a web frontend. As this was so much fun, I used my new knowledge to put together an automated system with a web-based interface for testing the reliability of the Wi-Fi and cellular connectivity of mobile devices. The basic measurement idea is sketched below.
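The measurement side of such a system boils down to something surprisingly simple. Here's a heavily condensed sketch of the idea; the URL, interval and log location are made up for illustration and the real system is considerably more elaborate:

#!/bin/bash
# Sketch: periodically check whether the network connection works and append
# the result to a CSV file that a web frontend can later visualize.
LOGFILE="/var/log/connectivity.csv"
TEST_URL="http://example.com/"
INTERVAL=60   # seconds between checks

while true; do
  if curl -s --max-time 10 -o /dev/null "$TEST_URL"; then
    STATUS="up"
  else
    STATUS="down"
  fi
  echo "$(date +%s),$STATUS" >> "$LOGFILE"
  sleep "$INTERVAL"
done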

Fiber Connectivity

A 25 Mbit/s downlink and a 5 Mbit/s uplink at home is not bad, but once you've seen what a Fiber To The Home (FTTH) connection can do, it seems slow indeed. When I benchmarked that 1 Gbit/s FTTH connection in Paris I got a sustained 260 Mbit/s in the downlink direction. Technical details and images of the installation can be found here. But while this is all nice, I wonder whether fiber will become the new monopoly and whether perhaps G.fast will be a remedy!? Time will tell.

From the Terabyte SSD to Vintage Computing

The 500 GB SSD I bought last year still feels brand new, but I managed to use up all its capacity only a year later. So I had to upgrade my notebook once again and have ended up with a 1 TB SSD. Again, I used the disruptive occasion to get rid of a couple of other limitations by separating the OS partition from a dm-crypt encrypted data partition, which allows me to back up and restore the OS partition in a few minutes compared to the several hours required before.

Going back in time was equally exciting. At the beginning of the year I was in Silicon Valley and at last had some time to go to the Computer History Museum. Later in the year I also visited the Heinz Nixdorf museum in Paderborn, Germany, which declares itself the biggest computer museum in the world. And indeed, it is a museum not to be missed if one has an interest in vintage computing.

And last but not least: This year was the 10th anniversary of my first 3G mobile. It's been only 10 years but the mobile landscape has changed dramatically during this time.

I can hardly believe all of this happened in 2014. After all, the year felt so short…

31C3 This Week – Schedule And Links To Video Streams

Like every year, the Chaos Communication Congress takes place in Germany between Christmas and the new year. And like every year, I wished I could go there, but other things once again took precedence. Next year perhaps… Anyway, as every year, video streams of most sessions are available in real time and for download shortly thereafter, so I will at least be able to remotely watch the sessions I am interested in. Take a look at the schedule; the afternoon of the first day (27th December) is especially interesting from a mobile network and device point of view, with presentations by Sylvain Munaut, Tobias Engel and Karsten Nohl. Links to the video streams can be found here.

You Can’t Hide Your Location From Google With A VPN

Here's an interesting observation I recently made when I used a VPN in a hotel and came across a website that asked for my location details in the browser. I was confident Firefox would not be able to find out where I was, as I used a VPN tunnel to my gateway in Paris. I thus pressed the 'yes' button, expecting that the website would then tell me that I'm in Paris. Much to my surprise, however, it came up with my exact location. How is that possible, I thought, my IP address points to my VPN server in Paris!?

A detailed answer can be found on Firefox's geolocation info web page here. In addition to the IP address, Firefox also gathers the list of nearby Wi-Fi access points and sends it to Google's location server. At my location there were only two Wi-Fi access points in addition to my own, as shown in the screenshot on the left, but that was enough for Google to locate me.

Incredible on the one hand and scary on the other. It's no problem in this case, as Firefox asked me for permission before sending the data to Google and the web page. But it shows how easily 'others' can pinpoint your location if they manage to get a little piece of software onto any connected device you carry that has a Wi-Fi interface.

Socks and (Raspberry) Pis for Christmas

I like personal gifts for Christmas and very much appreciate self-knitted socks and other self-made things. Personally, I have to admit that handcraft is not a strength of mine, so I have to resort to other things. This year, however, I think I might have the perfect personal gift! I can't knit socks and pullovers, but I've decided to put a Banana Pi based Owncloud server together for the family and configure their smartphones to talk to that server instead of Google. That should be the equivalent of at least three pairs of hand-made socks 🙂

Digging Is The Expensive Part – Not The Fiber

Back in the early 1980s, telecommunication was a state monopoly in pretty much all countries around the world. Privatization in the 1990s and the resulting competition gave an incredible boost to the industry. Today we enjoy incredibly fast networks in many places, both fixed and wireless, and there is no sign that the increase in bandwidth requirements is slowing down anytime soon. We have come to a point, however, where the last mile infrastructure we have used for the last 25 years has reached its limits. Further evolution, both fixed and wireless, requires fiber links that not only reach up to the buildings but right into the homes. The problem is: who's going to pay for it and what impact does it have on competition?

As I've ranted previously, the company that puts a fiber into people's homes will become the telecom monopolist of the future. So while in some countries such as France, telecom companies are rushing to put fiber into the ground to be the first, companies in other countries like Germany are lagging behind. And even in France, fiber lines are mostly installed in densely populated areas, again leaving more rural areas at a disadvantage. The reason obviously is that it is expensive to put new fiber cables into homes. The point, however, is that it's not the fiber that is expensive, it's digging the trenches and the in-house installation that is required for the new connection. But why should the telecoms companies actually have to pay for the digging?

Let's have a look at roads (for cars), for example. These are built by the state, the country or the city with taxpayer money. It's critical infrastructure, so it makes sense. Telecommunication networks are also critical infrastructure used by everyone, and I guess we all agree we don't want to go back to state monopolies in this area. But how about using taxpayers' money to do the digging and put in empty ducts through which telecoms companies can then lay their fiber cables? This would give a huge boost to the digital economy and at the same time it would restore a degree of competition, as it would perhaps suddenly make economic sense again to lay several fibers to a building and give people a choice again as to which infrastructure they want to use.

I know, I'm dreaming as this is a political decision that has not been made so far and I don't see any indication of something like that happening in the future. But one can still dream…

Upgrading Ubuntu With Minimal Downtime And A Fallback Option

When it comes to my notebook that I use around 25 hours per day I'm in a bit of a predicament. On the one hand it must be stable and ultra reliable. That means I don't install software on it I don't really need and resort to virtual machines to do such things. On the other hand, however, I also like new features of the OS which means I had to upgrade my Ubuntu 12.04 LTS to 14.04 LTS at some point. But how can that be done with minimal downtime and without running the risk of embarking on lengthy fixing sessions after the upgrade and potentially having to find workarounds for things that don't work anymore!?

When I upgraded from a 512 GB SSD to a 1 TB SSD and got rid of my Truecrypt partitions a few weeks ago, I laid the foundation for just such a pain-free OS update. The cornerstone was to have an OS partition that is separate from the data partition. This way, I was now able to quickly create a backup of the OS partition with Clonezilla and restore the backup to a spare hard drive in a spare computer. And thanks to Ubuntu, the clone of my OS partition runs perfectly even on different hardware. And quick in this case really means quick: while my OS partition has a size of 120 GB, only 15 GB is used, so the backup takes around 12 minutes. In other words, the downtime of my notebook for this step of the upgrade was 12 minutes. Restoring the backup on the other PC took around 8 minutes.

On this separate PC I could then upgrade my cloned OS partition to Ubuntu 14.04, sort out small itches and ensure that everything is still working. As expected, a couple of things broke. My MoinMoin Wiki installation got a bit messed up in the process, Wi-Fi suspend/resume with my access point also got a bit bruised but everything else worked just as it should.

Once I was satisfied that everything was working as it should, I used Clonezilla again to create a backup of the cloned OS partition and then restored this to my production notebook. That meant another 12 minute outage plus an additional 3 minutes to restore the boot loader with a "Boot Repair" USB stick, as my older Clonezilla version could not restore the Ubuntu 14.04 Grub boot loader after the restore process. The manual equivalent of what the "Boot Repair" stick does is sketched below.
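In case you prefer to do without the extra stick, Grub can also be reinstalled by hand from any live USB system. A sketch, assuming the restored root partition is /dev/sda1 and Grub goes into the MBR of /dev/sda (adapt both to your own layout):

# Mount the restored root partition and the pseudo file systems, then
# reinstall Grub from inside a chroot.
sudo mount /dev/sda1 /mnt
for d in /dev /dev/pts /proc /sys; do sudo mount --bind $d /mnt$d; done
sudo chroot /mnt grub-install /dev/sda
sudo chroot /mnt update-grub
for d in /sys /proc /dev/pts /dev; do sudo umount /mnt$d; done
sudo umount /mnt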

And that's it, Ubuntu 14.04 is now up and running on my production PC with only two 12-minute outages. In addition, I could try everything at length before I committed to the upgrade, and I still have the backup of the 12.04 installation that I could restore in 12 minutes should the worst happen and I discover a showstopper down the road.

So was it worth all the hassle, other than being able to boast that I have 14.04 up and running now? Yes, I think it was, and here's a list of things that have significantly improved for my everyday use:

  • Video playback is smoother now (no occasional vertical shear anymore)
  • The dock shows names of all LibreOffice Documents now
  • Newer VirtualBox, which seems to be faster (graphics, windows, etc.)
  • MTP on more phones is recognized
  • Can be booted with external monitor connected without issues
  • Nicer fonts in Wine Apps (Word, etc.)
  • Nicer animations/lock screen
  • Updated LibreOffice with improved .doc and .docx support
  • The 5-year support period now starts from 2014
  • Better position to upgrade in 2 years to 16.04
  • Menus in header save space
  • VLC has more graphical elements now

Walking Down Memory Lane – 10 Years Ago, My First 3G Mobile

Is 10 years a long or a short timeframe? It depends, and when I think back to my first UMTS mobile, which I bought 10 years ago to the day (I checked), the timeframe seems both long and short at the same time. It seems like an eternity from an image quality point of view, as is pretty visible in the first picture on the left, which is the first picture I took with my first UMTS phone, a Sony Ericsson V800 – Vodafone edition. Some of you might spot another UMTS phone on the table, a Nokia 6630, but that was a company phone so it doesn't count.

On the other hand, 10 years is not such a long time when you think about how far the mobile industry has come since. Back in 2004 I had trouble finding UMTS network coverage, as mostly only bigger cities (with a population above perhaps 500,000) had 3G coverage at the time. That first UMTS phone was still limited to 384 kbit/s, no HSDPA, no dual-carrier, just a plain DCH. But it was furiously fast for the time, the color display was so much better than anything I had before and the rotating camera in the hinge was a real design highlight. Today, 10 years later, there's almost nationwide 3G and even better LTE coverage, speeds in the double-digit megabit/s range are common, and screen size, UI speed, storage capacity and camera capabilities are orders of magnitude better than at that time.

Even more amazing is that at the time, people in 3GPP were already thinking about the next step. HSDPA was not yet deployed in 2004 but was already standardized, and meetings were already being held to define the LTE we are using today. Just to get you into the mindset of 2004, here are two statements from the September 2004 "Long Term Evolution" meeting in Toronto, Canada:

  • Bring your Wi-Fi cards
  • GSM is available in Toronto

In other words, built-in Wi-Fi connectivity in notebooks was not yet the norm and it was still not a given to have GSM coverage in the places where 3GPP met. Note, it was GSM, not even UMTS…

I was certainly by no means a technology laggard at the time, so I can very well imagine that many delegates attending the Long Term Evolution meeting in 2004 still had a GSM-only device that could do voice and SMS, but not much more. And still, they were laying the groundwork for an LTE that was so far away from the reality of the time that it almost seems like a miracle.

I close for today with the second image on the left, which shows my first privately owned GSM phone from 1999, a Bosch 738, my first UMTS phone from 2004 and my first LTE phone, a Samsung Galaxy S4 from 2014 (again, I had LTE devices for/from work before, but this is the first LTE device I bought for private use). 15 years of mobile development side by side.