98% Wireless Broadband Coverage A Requirement After The German Spectrum Auction

As reported in the previous post, another spectrum auction has started in Germany this week, this time with only 3 companies allowed to bid for the spectrum. Two things make this auction especially interesting for me. First, all of the GSM 900 spectrum and quite a bit of the GSM 1800 spectrum is being re-auctioned as the licenses awarded a decade or two ago are expiring. So it's going to be interesting to see who wants to acquire how much spectrum in the pretty narrow GSM 900 band, which is not well suited for broadband Internet services because it will have to carry the narrowband GSM service for the foreseeable future. The 1800 MHz band is a different beast as it's broad enough for high-speed Internet services from several network operators and is already used in Europe for that purpose in addition to GSM.

The second interesting thing for me in this auction is that the German regulator (BNetzA) requires each company that acquires spectrum in the new 700 MHz (digital dividend 2) band to cover 98% of the population with its mobile broadband Internet service, with a speed of at least 10 Mbit/s per customer on average and 50 Mbit/s per sector of a base station. The latter requirement means that at least 10 MHz of spectrum has to be used per sector. For the details have a look at the 250+ page rules and requirements document for the auction.
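As a quick plausibility check of that last conclusion (my own back-of-the-envelope reasoning, not a figure from the auction document): 50 Mbit/s in 10 MHz corresponds to a sector spectral efficiency of

    \frac{50~\text{Mbit/s}}{10~\text{MHz}} = 5~\text{bit/s/Hz}

which is already close to the peak of what a 10 MHz LTE carrier with 2x2 MIMO can deliver (around 75 Mbit/s under ideal conditions). So anything much narrower than 10 MHz could not realistically meet the 50 Mbit/s per sector figure.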

Today, we are still quite a bit away from that goal. According to the regulator's report for 2014 that was published a couple of days ago, 92% of the population is now covered by at least one LTE network and the market leader's LTE network covers 80% of the population. According to the auction rules, however, EACH network has to cover 98% of the population in three years' time, so from that point of view there's still some work to be done.

And finally I also find it quite interesting that the rules go into the details of which statistics each network operator has to deliver to the regulator annually, including the requirement to supply SIM cards and methods for the regulator to make its own assessment of how well each network is deployed.

German Spectrum Auction 2015 Started – Online Sources

Yesterday, the 2015 spectrum auction for wireless network operators started in Germany. In addition to the re-auctioning of spectrum in the 900 and 1800 MHz bands due to decades-old licenses expiring, new spectrum is being auctioned in the 700 MHz band (Digital Dividend 2) along with some extra uni-directional spectrum in the 1500 MHz band. Hopefully, having only 3 companies bidding for the spectrum will not drive up the auction results to unreasonable levels as in the past. Anyway, Teltarif has published a good report about the results of the first day here and they'll probably follow the proceedings and comment on a regular basis. Worth watching in case you are interested in spectrum auctions. Their posts are in German but Google can help with the translation… And in case you are wondering about the T&Cs of the auction, they can be found here, again, unfortunately in German only.

We Are Past The “Human Subscription Peak” As Well

A couple of days ago I started my analysis of this year's report of the German telecoms regulator (BNetzA) with a first post on how we are past "peak telephony". The report also clearly lays out that we are past the "human subscription peak" as well.

10 years ago, in 2004, there were 89 million mobile subscriptions in Germany. The peak was reached back in 2011 with 142 million subscriptions. Since then the number of subscriptions has gone up and down by a few million year on year, and in 2014, 139 million subscriptions were counted. In other words, there is no growth anymore despite the push for cellular-connected mobile devices beyond smartphones, such as tablets.

So should we see growth in this area again in the future, it will probably come from other areas, machine-to-machine communication for example. In other words, the number of SIM cards might from now on be a good indicator of how much and how fast non-human machine communication gains traction.

Past the “Peak Telephony” In Germany

Recently, Dean Bubley wrote an interesting blog post about how most industrial nations are beyond “peak telephony”, i.e. the number of voice minutes in fixed line and mobile networks combined is decreasing. When the German regulator published its report for 2014 a couple of days ago I had a closer look here as well to see what the situation is in Germany. And indeed, we are clearly past peak telephony as well.

And here are the numbers:

In 2014, fixed line networks in Germany saw 154 billion outgoing minutes, which is 9 billion minutes less than the year before. On the mobile side there was an increase of 1 billion minutes. In total that's 8 billion minutes less than the previous year, or about -3%. The trend has been going on for quite a while now: in 2010, combined fixed and mobile outgoing voice minutes stood at 295 billion compared to 265 billion minutes in 2014. That's about 10% less over that time frame.
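For transparency, here is how those percentages fall out of the numbers above (the previous year's total being 265 + 8 = 273 billion minutes):

    \frac{8}{273} \approx 2.9\% \qquad\qquad \frac{295 - 265}{295} \approx 10\%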

A question the numbers can't answer is where those voice minutes have gone. Have they been replaced by the ever-growing traffic of instant messaging apps such as Whatsapp, or have they been replaced by Internet-based IP voice and video telephony such as Skype? I'd speculate that it's probably both to a similar degree.

Skype Still Supports Linux – But I Got Rid Of It On The PC Anyway

Despite my fears last year that Skype, which is owned by Microsoft these days, might cease to support PC Linux at some point and leave me stranded, it hasn't happened yet. Last year I speculated that should this happen I would probably just move Skype to an Android tablet and be done with it. As I remarked at the time, this would have the additional benefit of reducing the exposure of my private data to non-open source programs. Between then and now I went ahead and tried out using Skype on a tablet and a smartphone despite its ongoing support for Linux on PCs and found that it's even nicer to use on these platforms than on the PC. During video calls I can even walk around now without cutting multiple cords first. And since I otherwise only use that tablet for ebook reading, there's no exposure of my private information to a non-open source program anymore. I'm glad tablets have become so cheap that one can have several of them, each dedicated to a few specific purposes. That ties in well with my thoughts on the Macbook 2015 becoming the link between Mobiles and Notebooks.

My Gigabit/s Fiber in Paris Is Already Outdated – Say Hello to 10 and 40 Gbit/s PON

Since I know how a gigabit GPON fiber link feels and performs and that it's deployed significantly in some countries, I can't help but wonder when telecom operators in other countries will stop praising DSL vectoring with 100 Mbit/s in the downlink and a few Mbit/s in the uplink as the future technology and become serious about deploying fixed line optical networks!? Having said that, I recently noticed that the Gigabit Passive Optical Network (GPON) I have in Paris with a line rate of 1 Gbit/s is actually quite out of date already.

10G-PON, specified back in 2010, is the successor technology and, as the abbreviation suggests, offers a line rate of 10 Gbit/s. According to the Wikipedia article, that line rate can be shared by up to 128 users. And thankfully, GPON networks are upgradable to 10G-PON as the fiber itself is reused and only the optical equipment at each end has to be changed. Backwards compatibility is ensured because 10G-PON uses a different wavelength than GPON, so both can coexist on the same fiber strand. This allows a gradual upgrade of subscribers by first changing the optical equipment in the distribution cabinet and subsequently the fiber devices (ONTs) in people's homes.
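To put the sharing into perspective, here is a simple worst-case calculation based on that 128-user figure (my own arithmetic; real deployments often use smaller splits): even with every subscriber on a fully loaded tree transferring data at the same time, the average share is still

    \frac{10~\text{Gbit/s}}{128} \approx 78~\text{Mbit/s per subscriber}

and in practice much more is available at any given moment since not everyone is active simultaneously.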

But that's not all, as standardization of the successor to the successor is already in full swing. NG-PON2 is the new kid on the block and will offer 40 Gbit/s downlink speeds on several wavelengths over a single fiber cable and 10 Gbit/s in the uplink direction. For details have a look at the ITU G.989.1 document that contains the requirements specification and G.989.2 for the physical layer specification.

So who's still talking about a measly 100 Mbit/s in the downlink?

5G – Separating Use Cases From Technical Debates

For first and second generation mobile networks the use case was pretty simple: let's make voice telephony wireless and mobile. For third and fourth generation networks the use case was to mobilize the Internet. Perhaps it's only in retrospect, but these use cases are pretty easy to grasp. On the other hand, I can still remember the 'search for a killer app' for 3G networks that went on for many years. I'm not sure it was ever found, as that search was done in a mindset that the killer app should come from network operators, when in reality the 'killer app', as far as I'm concerned, was to mobilize the Internet as a whole. So what about 5G then?

Compared to the discussion that was taking place around 3G (UMTS) and 4G (LTE) at the time, the discussion on what 5G will be and why we need it is too hazy for me: lots of more or less realistic use cases are discussed, while the discussion on how 5G will actually work happens more or less in the background. Stephen Temple over on his web site suggests splitting the 5G discussion into a use case debate and a technical debate. A good idea in light of the fact that most of the network operator centric use cases discussed at the time for 3G and 4G were never realized the way they were discussed (e.g. IMS as a universal service platform). He has a number of very interesting thoughts on the technical side, including the potential non-regulation of spectrum above 5 GHz and close range wireless-fiber networks as technical cornerstones of 5G.

C64 History: Chuck Peddle Amp Hour Podcast

Being a bit of a history buff (e.g. see my article on 'C64 Vintage and Virtual Hardware For Exploring The Past') I stumbled over a recent podcast the Amp Hour did with Chuck Peddle. If the name doesn't sound familiar, it could be because mainstream media often portray the 1980s and 1990s as an epic struggle between Apple, Microsoft and IBM. This is perhaps because all three companies still exist today, but the story is a lot bigger than that, as companies such as Commodore and Atari and home computers like the C64 played a big part in that revolution as well. In the Amp Hour interview, Chuck Peddle, the leader of the team that designed the 6502 processor that would make home computing in the 1980s affordable for the masses, goes back to the times before and after the C64 and tells the story from his point of view.

Peddle says that while Apple built for style and IBM for business, Commodore built for the masses. I more than agree with this statement, as the C64 was the only home computer my parents could afford to buy me as a kid. Both Apple and IBM played in a totally different league from a pricing point of view. So if you want to have a good time hearing about history, lean back and enjoy that podcast. And if you want to learn more, Brian Bagnall's 'Commodore – A Company On The Edge' is a great source for additional details and stories about the 1980s and 90s in computing.

The 2015 Macbook Is The Link Between Mobiles And Notebooks

The 2015 Macbook is certainly not a product that could replace my productivity notebook. With only a single connector for power and connectivity it is utterly disqualified for my usage scenario, where the 3 USB ports, single external screen connector and single Ethernet port I have on my current notebook are often not enough. But even so, the device is a first of its kind because it's a product that bridges the gap between smartphones/tablets on the one side and notebooks/PCs on the other.

Have a look at the iFixit teardown and you'll see what I mean. The motherboard is only slightly bigger than what you find in a tablet today and certainly doesn't look like a traditional notebook motherboard anymore. But the whole setup is strong enough to run a "full" operating system and not a stripped-down one such as Android or iOS. If the screen were touch-sensitive, the device would actually be a tablet with a built-in keyboard and a full desktop operating system rather than a notebook.

So what began in the mobile space in 2007/2008, when the Linux and BSD kernels in the first Android and iOS devices replaced operating systems that had been developed far away from the desktop world, has now extended to the operating system as a whole.

Linux Device Drivers – Exploring the Kernel And Reviewing A Book 10 Years Later

Apart from mobile, I like to explore other computer topics every now and then, as surprising ideas often spring up from this that also impact my work in mobile. Android programming, Raspberry Pis, Owncloud and my dive into PHP web programming and databases last year immediately filtered back to the work that earns the daily bread. Earlier this year I started having a closer look (again) at the Linux kernel.

The great thing about the Linux kernel is that it's open source and you can have a look yourself. The problem, however, is that it's a monumental piece of code, and without any prior kernel knowledge, diving into the material seems daunting. I'm a hands-on person, so just going through the code for the fun of it is not my cup of tea. So I was looking for some insight with practical things to be done along the way. That's not easy to come by, as books on the topic with hands-on tutorials are rather dated. One of the best books on the topic is perhaps the "Linux Device Drivers" book, as it explains many things about the kernel from a device driver perspective. The good thing about this approach is that it offers hands-on experience with a couple of sample drivers one can compile, modify and run.
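To give a flavor of what that hands-on approach looks like, here is a minimal loadable module in the spirit of the book's first "hello world" example (my own stripped-down sketch, not code taken from the book), which can be built out-of-tree against the headers of the running kernel:

    #include <linux/init.h>
    #include <linux/module.h>

    MODULE_LICENSE("Dual BSD/GPL");

    /* Runs when the module is loaded with insmod */
    static int __init hello_init(void)
    {
        printk(KERN_ALERT "Hello, kernel world\n");
        return 0;
    }

    /* Runs when the module is removed with rmmod */
    static void __exit hello_exit(void)
    {
        printk(KERN_ALERT "Goodbye, kernel world\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);

A Makefile containing little more than "obj-m := hello.o" plus an invocation of the kernel's own build system (make -C /lib/modules/$(uname -r)/build M=$PWD modules) is enough to compile it, insmod and rmmod load and unload it, and the messages show up in the kernel log (dmesg).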

Unfortunately, the current 3rd edition of the book is from 2005. Ancient history… which is why I would never have bought it in print at first. Fortunately, it is available online free of charge, so I decided to start reading it in electronic form and see if the sample code would still run. The source code would of course not compile anymore, as too many changes have been made to the kernel since then. But a number of people have updated the source over time and there's a working version by duxing2007 available on GitHub that compiles with current kernels.

Together with the working source, the book suddenly made a lot more sense, and even though some of its content is clearly dated (e.g. using the parallel port for some of the sample code or discussing the ISA bus), the majority still gives a good introduction to the kernel, with lots of things to try out via the driver examples that one can compile, run and modify. At some point I decided it was worth buying the print version, as sometimes information in print still beats the electronic version. In other words, despite the book being 10 years old now, I still found it a worthwhile read!

While not necessary to compile and run the examples in the book, having the kernel source to explore is great. As it turns out, it's quite simple to download and even compile it. If you are running Ubuntu, have a look here for how to do that. On my notebook, in a virtual machine running an Ubuntu guest, it takes around 2.5 hours to compile the kernel, and installing the compiled kernel to boot from is a simple command. I wouldn't have dared to do that on my notebook directly, but in a virtual machine there's nothing you can break in the process that couldn't be undone by restoring a previous snapshot of the guest OS with a single click.

With all these things in place it's never been easier to explore the kernel! Have fun!