When I recently bought a SIM card for the first mobile phone of my 10-year-old nephew, I was positively surprised that, unlike most other prepaid offers I have seen for a long time, packet switched network access is enabled by default, but all attempts to browse the web are redirected to a landing page from which a tariff has to be selected before Internet access is granted and the prepaid account is charged. This is great, as even if he should accidentally activate Internet access while browsing through the menu structure and playing with the device, it will still be free of charge until an option from the landing page has been selected. No accidental activation and subsequent charging. I hope that is something I will see more often on prepaid offers in the future!
GSM Switch-Off: AT&T Targets 2017
Yes, I know NTT-DoCoMo long ago shut down its 2G network, but that was a special case as it was their proprietary technology, little used anywhere else. Since then there have been rumors, speculation and analysis about when network operators in other countries might switch off their far more popular and widespread 2G GSM networks. Now AT&T has given a date for its US GSM network shutdown: it is envisaged for 2017, as reported by the Wall Street Journal.
2017, that's 5 years from now. I've noticed AT&T making a lot of progress in deploying UMTS in remote areas, and 5 years is enough time to continue that process in addition to rolling out LTE. Also, when I was recently in Canada, I was positively surprised by the 3G coverage along highways in sparsely populated areas between cities. On 850 MHz, the coverage area of a UMTS cell is similar to that of a GSM cell, and for carriers that quit CDMA in the past to go to UMTS it obviously did not make sense to deploy GSM alongside.
Five years ago, back in 2007, I wrote a post on this blog about when GSM would be switched off. Let's take a look at what I thought at that time and how it matches today's situation and AT&T's announcement:
"So what are we going to see in Europe by 2012 then? In five years from now [i.e. 2012] I expect the majority of subscribers in Europe to have a 3G compatible phone that is backwards compatible to 2G. "
[Yes, right on the mark: more than half of the phones sold today are smartphones, and even feature phones now have 3G included, too. There are few GSM-only models to be found in shops anymore.]
"In urban areas, operators might decide to downscale their GSM deployment a bit as most people now use the 3G instead of the 2G network for voice calls. Cities will still be covered by GSM but maybe with a smaller number of available channels / less bandwidth."
[Mostly on the mark: While for many years people have switched off 3G in their phones for fear of higher battery power consumption and thus made most of their voice calls on 2G, that's a thing of the past in 2012. Accessing services on the Internet from smartphones has become a mass market trend. As a consequence, most voice calls from such phones are now established over 3G networks. In the UK, O2 has deployed UMTS 900 in London. It's still a bit of an exception in Europe. O2 in the UK is in the fortunate position of owning half of the 900 MHz band so it could easily carve out 5 MHz and put a UMTS channel there. There are no announcements of similar intentions by other European network operators for the moment. However, with voice calls migrating to 3G due to the use of smartphones I think this will not remain the only major urban deployment of UMTS 900 in Europe.]
"Such a scenario could come in combination with yet another equipment refresh which some operators require by then for both their 2G and 3G networks. At that time, base station equipment that integrates 2G, 3G and beyond 3G radios such as LTE could become very attractive. The motto of the hour could be "Replace your aging 2G and 3G equipment with a new base station that can do both plus LTE on top!"
[Yes, that's what we see today when new network equipment is being rolled out. Huawei, for example, calls it Single RAN, and NSN's Flexi concept goes in the same direction.]
"I wonder if it is possible by then to only use one set of antennas for all three radio technologies!? If not, adding yet another set of antennas on top of an already crowded mast is not simple from both a technological and psychological point of view."
[Today, at least GSM and UMTS use the same antenna, but I haven't yet seen what kind of antennas are used at base station sites where GSM, UMTS and LTE are deployed, all in very different frequency bands. Single-antenna solutions exist, even in variants that have several antennas in a single casing, as demonstrated for example by Kathrein at the Mobile World Congress in 2011.]
When looking at all of these developments I think it is very likely that we will see a lot of movement around what kind of technology is used in the 900 MHz band in Europe. In many countries, licenses for 900 MHz spectrum will be renewed, reassigned or re-auctioned in this time frame, and in many countries the auctions for the 800 MHz digital dividend band and the 2600 MHz band for LTE have not yet taken place. All of this will have a significant impact on what network operators do with their 900 MHz spectrum assets. My prediction is that GSM will still be around in Europe in 2017, but the debate on when to switch it off will be in full swing. I've described what such a phaseout could look like in a post on 'GSM Phaseout Scenarios'. Despite being written in 2008, I think it still applies from today's perspective.
Traffic Shaping When Using More Than 60 GBytes a Day
Yesterday, I reported on the cost of a terabyte of data volume, as that number gives a good idea of how much money is involved in transferring data, at least from a central place. It's quite low. Now here's another interesting number (sorry, the link points to a German resource) that can help in the debate around file sharing and how much network capacity active file sharers use compared to average consumption.
In the post linked above, Kabel Deutschland (a German cable network operator) says that they only throttle when the file sharing and one-click hosting traffic of a user exceeds 60 gigabytes per day. Wow, that's more than what I use in a whole month without file sharing but with lots of online video rental and streaming. 60 gigabytes times 30 days, that's 1.8 terabytes a month.
On the other hand, their terms and conditions state that they reserve the right to throttle file sharing and one-click hosting traffic after 10 gigabytes a day. Hm, sometimes I would come close to that when I download a couple of weeks of recordings from my online video recording service.
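To put the two thresholds into perspective, here's a quick back-of-the-envelope calculation in Python (the 30-day month is of course just an approximation):

```python
# Back-of-the-envelope: convert the daily throttling thresholds mentioned
# above into monthly data volumes, assuming a 30-day month and 1 TB = 1000 GB.
DAYS_PER_MONTH = 30

for daily_gb in (60, 10):
    monthly_tb = daily_gb * DAYS_PER_MONTH / 1000
    print(f"{daily_gb} GB/day -> {monthly_tb:.1f} TB per month")

# Output:
# 60 GB/day -> 1.8 TB per month
# 10 GB/day -> 0.3 TB per month
```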
According to the post, only 0.1% of their customers generate this kind of traffic. And here are further interesting numbers from that post:
- 15% of the users create 80% of the traffic in the network in the downlink direction. Unfortunately, they don't say how much traffic those 0.1% of users with 60 gigabytes or more per day generate. That would be a much more interesting number, because the first one doesn't impress me: there will always be a large customer base that uses its Internet connectivity only a little and still pays the full price for the line.
- In the uplink direction, 5% of the users generate 80% of the traffic. Again, the overall amount of traffic from those 0.1% of heavy file sharing users would be an even more interesting number.
One can think about those numbers in different ways, but traffic shaping vs. net neutrality remains a hot topic. Personally, I think I am somewhere in the middle of this debate, being well aware that this middle ground is a very slippery place to be. It gets a little less slippery if network operators are up front about the topic and state their terms and conditions around throttling clearly instead of hiding them in the fine print. Then it's up to customers, with all the facts on the table, to decide between a lower price with some sort of throttling once they hit certain limits and something more expensive without it.
6 Pounds for a Terabyte of Data Volume
Back in January I did a quick analysis of current prices for IP transit data here, because I continue to be amazed that some DSL providers keep threatening to throttle or traffic shape users with above-average monthly data consumption [in the hundreds of GB range]. With IP transit prices as low as they are today, I wonder whether there is really a significant financial reason for that.
Anyway, today I came across another interesting number: a server hosting provider in the UK includes 10 TB of data volume per month over a 100 MBit/s link in a 30-pound hosting package. After that, the line rate is reduced to 10 MBit/s and any extra data still remains free of charge. Note that the 30 pounds are not only for the data volume; the package also includes a high-end 4-core CPU, 16 GB of RAM, a 2x 3 TB hard drive RAID and the power to run this beast 24/7. If after the 10 TB you still want the full 100 MBit/s line speed, you pay 6 pounds per terabyte extra.
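Just to show how little "a couple of hundred gigabytes" costs at that overage rate, a small sketch (the 250 GB figure is simply an arbitrary example):

```python
# Rough cost estimate at the overage rate mentioned above:
# 6 pounds per extra terabyte once the included 10 TB are used up.
PRICE_PER_TB = 6.0   # pounds per terabyte
EXTRA_GB = 250       # arbitrary example for "a couple of hundred gigabytes"

cost = EXTRA_GB / 1000 * PRICE_PER_TB
print(f"{EXTRA_GB} GB of extra traffic costs about {cost:.2f} pounds")
# -> 250 GB of extra traffic costs about 1.50 pounds
```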
In other words, a couple of hundred Gigabytes is nothing for them…
When Even Hackers Don’t Want To Connect To the Network
One thing that frequently pops up in reports about hacker conferences such as the annual CCC conference in Berlin or Defcon in Las Vegas is that many of the hackers present are very reluctant to connect their PCs to any network there for fear of being hacked. While this might sound like a sensible precaution, I think it is rather worrisome.
These are the people who know best how to protect themselves and how to set up their equipment securely. And yet even they feel they cannot securely connect to the network. So what's the difference between the networks at those conferences and networks in other places? Only the number of hackers present. But if it's not safe to connect at those conferences, it is no safer to use networks anywhere else.
Not much trust in our computing and network infrastructure by those who know what is possible and what can be done. Makes one think…
The First Mobile – Lock Some Things
There we go, my nephew will go to high school after the summer break and it was about time for a mobile phone for him. Obviously there are two general choices: either a dumb phone that can only be used for voice calls and text messaging, or a smartphone that pretty much opens up the world. There's not much in between the two extremes.
The first choice would probably not bring him very far, and he himself was quite insistent that he wants to play games. Quite understandable for a 10-year-old. So we finally decided to go for a smartphone. But he hasn't used the Internet much so far, and he is perhaps still a bit too young to act responsibly, not to fall into the first trap he comes across on a web page, or not to find stuff that is not for his age. So what to do?
From a pricing point of view, Android phones have become very affordable, and if he breaks a 100-euro device, which kids tend to do, it's not the end of the world. Also, there are about a gazillion app locker apps in the Android market that seem to do just what I want: lock access to apps that should be off limits for the moment, including the web browser and the Android app market. After playing around a bit I selected App Locker, as it worked quite well on my device and protects app execution with a password. The browser, Google Maps and the app store are now protected.
One could also block access to the settings to ensure the device cannot be reset to get rid of the app locker and that the side-loading blocker cannot be deactivated to install an alternative browser. But perhaps once he figures that out, he's old enough for the Internet anyway…
One thing I didn't like: the app requires full Internet access rights and wants to read the phone state and ID. Not quite what I have in mind as a privacy-conscious person, but at least it doesn't want to access the address book and other sensitive private information, which would be totally unacceptable. Also, the device is not connected to the Internet anyway, as Wi-Fi and packet data are turned off. And once the device does get connected to the Internet, the app can be deleted anyway.
The Fn Key At the Wrong Place And How It Helped Virtualization
Another of my confusing titles for a post, but this little story shows how beneficial it can be to sometimes have a look left and right of the beaten path.
I recently bought a new sub-notebook from Lenovo and was quite irritated that the Fn key is where the Ctrl key is supposed to be. It drove me crazy, as I use the Ctrl key quite often and kept hitting the Fn key instead. Luckily, Lenovo is aware of its blunder and offers an option in the BIOS to swap the Fn and Ctrl keys. While I was doing that, I had a look around at what other options the BIOS offers and stumbled over a page with a couple of options for CPU virtualization support that were all disabled. I took note of it, but since I didn't see a need to virtualize anything at the time, I left the options disabled.
Then, a week or so later, I turned to virtualization to run a VirtualBox VM with Windows XP in it for some networking tools that are not available under Ubuntu. Runs like a charm, as reported. Setting up Ubuntu in another guest machine, however, did not work as smoothly: the experience was very slow and the user interface not very responsive. I tried a lot of things to improve it, more memory, more graphics RAM, etc., but nothing helped.
Then, while relaxing a bit, I suddenly remembered the 'virtualization' settings in the BIOS. I rebooted the computer, activated the settings and started Ubuntu in the virtual machine again. And voilà, it suddenly ran like a charm. The speed of Ubuntu running in the virtual machine is now pretty much the same as that of the Ubuntu host system. Incredible! I wonder why the Windows XP guest doesn't need it!? Perhaps it's too old 🙂
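For anyone who wants to check whether those BIOS options are actually in effect without rebooting, the CPU flags that Linux reports give it away. A minimal sketch, assuming an x86 CPU on the Ubuntu host:

```python
# Check whether the CPU currently exposes its hardware virtualization
# extensions (Intel VT-x shows up as the "vmx" flag, AMD-V as "svm").
# If the CPU supports them but the flags are missing, they are most likely
# disabled in the BIOS - exactly the situation described above.
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())

if {"vmx", "svm"} & flags:
    print("Hardware virtualization extensions are enabled")
else:
    print("No vmx/svm flags found - check the virtualization options in the BIOS")
```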
So the Fn key at the wrong place actually helped me solve this totally unrelated problem. This, and of course the curiosity that made me have a look at the other options in the BIOS at the time. Sometimes I am amazed how the brain works.
Going to Court over Spectrum – A Never Ending Story
When it comes to LTE, the UK has fallen significantly behind other countries in Europe, which have long since auctioned or assigned their spectrum for LTE in the 800 and 2600 MHz ranges. One major issue in the UK is that whenever rules for the auction were discussed, there was always someone who was unhappy and threatened to go to court over the matter.
Recently, Ofcom unveiled yet another set of rules and again the threat of going to court is being raised. But seriously, with at least four incumbent contenders, each holding quite a different amount of spectrum in different parts of the frequency range, I think it's almost inevitable that someone will be unhappy. Some of that unhappiness may even be only for the sake of slowing down the process, because one company or another doesn't want to invest itself and tries to keep others from doing so.
But it has been like that forever. While four UK companies are battling over 250 MHz worth of spectrum today, court threats were in the room even back in the 1980s, when a battle raged between Vodafone and BT over just 3 MHz of spectrum. For all the juicy details, have a look at 'GSM History' by Stephen Temple, where he lays it out in minute detail in Chapter 14.
If someone did go to court over the matter, the UK would be in good company. In other countries, such as Germany, court actions over 800 MHz spectrum were also no exception. Half a year before the 'LTE' auction in Germany, incumbents O2 and E-Plus went to court over the matter. For details (in German) see here and here. In the end their complaints were dismissed and the auction took place.
Hopefully, with a couple of hundred pages to justify the rules of the UK auction, the same will happen in the UK, as another delay to the auction process would throw the country back even further than the three years it is already late to LTE. Sometimes I wonder how this could all have happened!? Back in the 1980s the UK was the first European country to liberalize its telecoms market, with stunningly positive effects. Thirty years later it has not only lost the lead but is hopelessly behind most other countries in Europe. Perhaps it's time to settle this in court and get on with it.
Going Virtual For Network Testing
One of the small disadvantages of using Ubuntu on the notebook I carry when traveling is that some software, e.g. for network testing, has so far required a reboot to Windows. On my new notebook I don't even have Windows installed natively anymore, so even that option, which I rarely used due to the hassle involved, is no longer available. But now there's a remedy!
When I tried to run Windows in a virtual machine (VirtualBox) on an Ubuntu host a few years ago, it worked quite well, but I had difficulties mapping USB hardware such as 3G dongles directly into the virtual machine to make it accessible to the software there. USB pass-through, however, seems to have advanced significantly since then. When trying again a couple of days ago, I noticed that USB pass-through of a number of USB 3G dongles and other USB devices from the Ubuntu host to Windows in the virtual machine now works like a charm.
USB 3G dongles that are recognized by Ubuntu and included in the network manager as a WAN network adapter are automatically detached from the host when a virtual machine is started in which their USB ID is configured for pass-through, and the native Windows drivers immediately pick up the hardware. When the virtual Ethernet network interface in the Windows guest is deactivated, it's even possible to establish an Internet connection over the 3G stick, and the guest machine becomes completely independent of the host's Internet connectivity. That in itself offers some interesting options for network testing.
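For reference, such a permanent USB filter can also be created from the command line with VBoxManage instead of through the GUI. A small sketch; the VM name and the vendor/product IDs below are placeholder values, the real IDs for a dongle can be found with lsusb on the Ubuntu host:

```python
import subprocess

# Placeholder values: replace with the actual VM name and the vendor/product
# IDs that lsusb reports for the 3G dongle on the Ubuntu host.
VM_NAME = "WinXP-NetTest"
VENDOR_ID = "12d1"
PRODUCT_ID = "1001"

# Create a permanent USB filter so VirtualBox detaches the dongle from the
# host and hands it to the guest whenever this VM is started.
subprocess.run([
    "VBoxManage", "usbfilter", "add", "0",
    "--target", VM_NAME,
    "--name", "3G dongle",
    "--vendorid", VENDOR_ID,
    "--productid", PRODUCT_ID,
], check=True)
```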
Another cool feature of using a virtual machine for testing is the ability to take a snapshot of the OS in the machine before installing new software. Once done with the testing, one click removes all changes. This prevents bloat and the unwanted side effects that drivers and software from different products can have on each other.
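The snapshot handling can be scripted as well, which is handy when the same take/restore cycle is repeated for every tool under test. A minimal sketch, again with a hypothetical VM name:

```python
import subprocess

VM_NAME = "WinXP-NetTest"   # hypothetical VM name, same as above
SNAPSHOT = "before-install"

def vbox(*args):
    """Small helper around the VBoxManage command line tool."""
    subprocess.run(["VBoxManage", *args], check=True)

# Take a snapshot before installing the software under test...
vbox("snapshot", VM_NAME, "take", SNAPSHOT)

# ...and, once the VM is powered off again after the tests, roll back all
# changes and remove the snapshot.
vbox("snapshot", VM_NAME, "restore", SNAPSHOT)
vbox("snapshot", VM_NAME, "delete", SNAPSHOT)
```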
And for final comfort, the 'full screen mode' quickly lets you forget that you are working in a virtual machine, while the 'seamless mode' puts program windows of the host and guest OS together on the host's desktop.
How nice!
Hollow Operator Service Deals Are Not Necessarily a Cash Cow
Back in 2009, I first reported on a growing trend of network operators outsourcing their network operation to third-party companies, thus in effect becoming "Hollow Operators". The main driver for some network operators to do so was to reduce their costs. With companies like Alcatel Lucent, Ericsson, Huawei, NSN and others eager to pick up such deals, competition must have brought down prices quite a bit, so perhaps they even got that.
But now it seems that many of these deals have not been very profitable for some of the service companies. Light Reading reports that Alcatel Lucent wants to get rid of or renegotiate 25% of its managed services deals as they are not profitable. Other companies also seem to be having a difficult time with some of their service contracts, as the report notes.
By now, however, it's likely that a significant amount of the local talent that previously operated a network is gone, long since replaced by supposedly cheaper labor in faraway countries. Best of luck to those hollow operators who are now faced with re-integrating their network operation, or with negotiating a service deal with someone else who cannot take over staff that no longer works locally. The other, obvious choice is to pay a higher price for ALU to continue the operation.
I wonder how that new price compares with what network operation cost while it was still done in-house, including the speed, flexibility and control the company had while doing so!? It was not too long ago, so perhaps someone will still remember and have a metric or two.