Youtube Data Rates To Smartphones

Back in December I reported on some tests I ran to determine the data rates used by Youtube for streaming videos at different resolutions. The result was that a 30 second input file of around 45 MB, generated with a Nokia N8 in 720p quality, was converted by Youtube into a 2.7 MBit/s stream (23 MB total) for the 720p HD version and into a 1.2 MBit/s stream for the 480p resolution. At the time I assumed that those streams would also be used on mobile devices, especially on the new smartphones with a dedicated Youtube client that offer quite nice looking "HQ" streaming from Youtube.

Recently I revisited the topic and decided that seeing is better than believing, so I traced what was actually going on. To see how the videos are requested, I used a Wi-Fi access point and a PC as a gateway to the Internet on which I could run Wireshark. I used three Android-based smartphones from three different manufacturers, each of which had the Google Youtube client installed by default. All of them fetched the "HQ" version of my original video (note it's not "HD", it's "HQ", a fine difference…), which actually turned out to have a resolution of 640×360 pixels (i.e. 360p), lower than the standard 480p quality on the PC. At 30 frames per second the video is streamed at 0.7 MBit/s, which translates into about 2.7 MB of data for a 30 second video clip.
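
Just to double check the arithmetic, here's the bitrate-to-volume conversion as a minimal Python sketch (the function name is mine; the bitrates and clip length are the ones from the measurement above):

```python
# Back-of-the-envelope check of the stream sizes mentioned above;
# just a sketch, not part of any measurement tooling.

def stream_size_mb(bitrate_mbit_s: float, duration_s: float) -> float:
    """Data volume in megabytes for a stream of the given bitrate and length."""
    return bitrate_mbit_s * duration_s / 8  # 8 bits per byte

print(stream_size_mb(0.7, 30))  # ~2.6 MB, matching the ~2.7 MB observed for the 360p "HQ" stream
print(stream_size_mb(1.2, 30))  # ~4.5 MB for the 480p PC stream, for comparison
```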

By the way: this version of the video can be watched on the PC as well, e.g. with VLC. With Wireshark, the URL of the stream can be copied and pasted into a web browser, which then downloads the stream into a file. That file can then be played back and examined with VLC.

0.7 MBit/s is roughly half of the streaming rate of the standard PC resolution and much easier to sustain in live networks under less than ideal coverage conditions than the standard or HD resolution streams. Nevertheless, the videos still look very good, even if they need to be upscaled a little for current smartphone displays. The Samsung Galaxy S and S II, for example, have a screen resolution of 800×480 pixels, almost big enough for the standard Youtube PC resolution of 854×480 pixels.


I Am Ready For A New Netbook But…

… they are not much better than the one I currently have, which is three years old. Can it really be? Three years is an eternity in computing! Look what has happened in the mobile domain in the last three years and compare.

Whenever I look at the latest netbook models, they still have a slow Atom processor with a slow built-in graphics adapter. I don't mind that the 1 GB of RAM hasn't advanced; my Ubuntu is quite happy with that. But I'd really like the CPU and graphics to be a bit faster. Three years is a long time for things not to improve all that much.

True, Ultrabooks are coming to the market now, so perhaps they are something for me. One thing I have second thoughts about is how I would replace the built-in battery myself. On all of my previous notebooks and netbooks the battery had to be replaced after a year or a year and a half because its capacity was no longer sufficient for my use. Also, more than 11" is no good for me either; it just has to be that small so I can work with it on the train. Any more and it stays in the bag.

My Changing Needs for Connectivity

Perhaps it's because I am getting older, I'm not sure, but my needs for and use of connectivity have notably changed. Not that I've changed my mind on wanting network coverage wherever I go, no, it's what I want it for that has changed over time. I can still remember the early days of wireless when I had a mobile phone and took it everywhere so I could be reached anytime. Once Internet access went mobile, that was extended by the desire to be reachable by email and other services at any time as well.

Fast forward to 2012 and I see a remarkable difference. Today, I no longer have the desire to be reachable anywhere and anytime. For phone calls and SMS messages I have filters. If I want to be undisturbed, I activate the filter and restrict the audible indication of phone calls and SMS messages to the few people I really want to be reachable for at any time. Other incoming stuff can be dealt with later. Same thing with emails.

Unlike other people, I don't have a bad conscience when somebody asks me why I wasn't reachable; I simply say that I was busy. There were times when the email client on the phone ran 24/7. Perhaps spam and emails not requiring an instant answer have worn me out a bit over time. Today, I stop the email client on the phone regularly because I don't want those beeps every so often to interrupt what I'm doing or thinking about.

[There, it beeped for an incoming email just while I was typing this sentence and I'm suppressing the urge to read it. I should have turned on the filter before starting this post…].

It wouldn't help to carry two devices, one for business and one for private communication. In effect, it would just double my work. On both I would again need the filters, because even when I am in the office I don't want to be reachable all the time. Ringing or beeping devices in meetings, no thank you. A filter for silent indication of incoming stuff, well, I do go that far, and if it's a general meeting in which I don't have a stake in all topics discussed, I even text under the table every once in a while.

On the other hand, I like having access to information at a moment's notice. Being able to search for some piece of information instantly to help me with what I'm doing at the moment, browse through Wikipedia, read the news on my favorite web sites, follow the blogs in my RSS streams whenever I wish, no matter where I am: that's where my desire for always-on connectivity comes from these days.

Sure, it's also great that I can contact other people whenever I want, but I try not to be disappointed if I don't reach them right away. After all, don't they have a smartphone? Yes, double standards, but I'm working on it 🙂 So I've stopped asking people why they were not reachable, or more or less unconsciously trying to make them feel guilty by telling them that I failed to reach them before. Connectivity everywhere is misunderstood by many as reachability everywhere.

[An incoming phone call has interrupted me and I took the opportunity to read the email that came in a couple of minutes ago as well. The urge is strong].

I have also found a renewed love for fixed line numbers. Yes, those numbers that connect to phones that are tied to the wall with a cable or are cordless at most with coverage ending a few meters after the doorstep. While it is convenient for many things to reach people when they are not at home, I sometimes want to explicitly reach people only when they are at home and have time to talk. Yes, I know I could text them in such a case to let them know I want to talk with them when it's convenient, but it's not the same thing.

You've detected some inconsistencies in this post? Yes, it's a complicated topic…

In other words my network coverage needs have changed from being for "reach-in" to being for "reach-out".

Free – First Contact

Last week I was in France for the first time this year, at the lovely but icy cold Côte d'Azur. To my surprise, the new network operator "Free", who just recently launched their own network in France, was already there, even in snowy Sophia-Antipolis. 208 15 is their Mobile Country Code / Mobile Network Code, shown on older devices that were built before they registered their name in the SE.13 network name database. I couldn't roam into their network yet, but that is not very surprising given that they launched less than a month ago.

And it seems their launch has brought quite some movement into the sleepy French mobile network landscape. With only three networks present, competition was relaxed and prices were high. Free changed all that: for 20 euros a month, users get all-you-can-eat calls and texts plus 3 GB of mobile data, finally bringing the country on par with prices in many other European countries.

The French are quite interested: there are reports that Free has likely gathered over one million subscribers in the first month, and mobile number portings are well beyond 40.000 a day, the maximum capacity the porting system was designed for. I'm a Bouygues customer, and last week I received an interesting email from them informing me that, oh by the way, Free is not as special as everybody thinks, as Bouygues also has a 20 euro a month all-you-can-eat plan, available on their website. And, it was stressed, they had had it long before Free launched. Interesting; it must have been very well hidden on their website, I never saw it. But o.k., the email alone is quite telling.

In other countries, regulators are not faced with competition springing up but rather with networks trying to merge. Regulators have rejected such approaches recently in Switzerland and just lately in Greece. Rumors of deals in other countries, however, continue to spring up. Let's hope regulators take the time to have a closer look at countries such as France to see what the difference is between a three and a four network operator landscape. From a consumer point of view, the choice is simple and pretty much irreversible: if two network operators are allowed to merge, infrastructure goes away and is unlikely to be built again by another contender anytime soon.

42 MBit/s Smartphones Are Great, But Not Because of Their Top Speed

The first HSPA+ dual carrier smartphones are on the market now and I can already imagine how they will be marketed: It can do 42 MBit/s! That's easy for the press but it completely misses the point.

It's not the theoretical top speed that will make these phones better for users than their current models but their ability to bundle two 5 MHz carriers. While this allows for the theoretical 42 MBit/s top speed, the real sweet spot is that it also doubles cell edge performance. There, the signal strength is low, interference is high and, as a consequence, data rates are much lower than closer to the cell tower. The ability to bundle two channels in effect doubles the data rate in many places with a weak signal and high interference, and that will be noticeable to users.
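
To make the point a bit more tangible, here is a toy calculation rather than a radio simulation; the single-carrier cell edge rate below is purely an assumed example value:

```python
# Toy illustration only: a second carrier adds an independent pipe at
# roughly the same rate, so the cell edge throughput about doubles.

def aggregate_rate(per_carrier_mbit_s: float, carriers: int) -> float:
    """Idealized aggregate rate when bundling identical carriers."""
    return per_carrier_mbit_s * carriers

cell_edge_rate = 1.5  # assumed example rate on one carrier at the cell edge (MBit/s)
print(aggregate_rate(cell_edge_rate, 1))  # 1.5 MBit/s with a single carrier
print(aggregate_rate(cell_edge_rate, 2))  # 3.0 MBit/s with dual carrier
```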

Another benefit not to be underestimated: dual-carrier chips usually come with sophisticated interference cancellation technologies and perhaps even two antennas for diversity. This again will do wonders in areas where neighbor cells, and even data sent to other users of the local cell, create interference.

Networks will be happy about such devices, too, as the 64-QAM modulation and interference cancellation technologies will implicitly increase overall network capacity, since data can be sent to such mobiles in much less time than to non-HSPA+ devices. In other words, more time is left to communicate with other devices.

Let's see if the mainstream press will take note of this at some point.

Interesting Data Usage Stats for 31st December 2011

Teltarif has recently reported (in German) some figures on fixed and mobile network usage in Germany on the 31st of December 2011 which are interesting to play around with a bit. The article says that the Vodafone Germany network transported 25 million megabytes between 8pm and 3am the following morning, i.e. in 7 hours.

Let's say that during that time the data rate on the interfaces to the rest of the Internet was more or less constant. What throughput does this number translate into? Here's the math:

  • 25.000.000 megabytes = 25.000 gigabytes (SI prefix system with 1 GB = 1000 MB)
  • 25.000 GB * 8 = 200.000 gigabits
  • 200.000 gigabits / (7 hours * 60 minutes * 60 seconds) = 7,93 GBit/s

7,93 GBit/s is quite an impressive number; that's (7,93 GBit/s * 60 seconds * 60 minutes) / 8 ≈ 3.570 gigabytes, i.e. around 3.5 terabytes of data transferred per hour.

Let's just assume for a second that the data rate would always be that high, which is likely not the case since this was a high load scenario. But just for the fun of doing it, and to compare it to something else I have in mind, the amount of data transferred per day would look like this:

  • 3.5 TB/h * 24h  = 84 TB per day
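
For those who want to verify the chain of numbers, here it is again as a small Python sketch; only the 25 million megabytes and the 7 hours come from the article, the rest is unit conversion with SI prefixes:

```python
# Reproducing the back-of-the-envelope calculation above (1 GB = 1000 MB).

total_mb = 25_000_000          # megabytes transported between 8pm and 3am
hours    = 7

total_gbit  = total_mb / 1000 * 8            # 200,000 gigabits
throughput  = total_gbit / (hours * 3600)    # ~7.9 GBit/s on average (the 7,93 above)

tb_per_hour = throughput * 3600 / 8 / 1000   # ~3.57 TB per hour
tb_per_day  = tb_per_hour * 24               # ~86 TB/day if sustained (the post rounds 3.5 TB/h * 24 to 84 TB)

print(round(throughput, 2), round(tb_per_hour, 2), round(tb_per_day, 1))
```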

Again a high number, but already back in 2010, 3UK reported that they shuffle 100 TB a day through their network. The numbers from the two operators are in the same ballpark, which helps to ascertain that at least the order of magnitude is correct (I'm always a bit cautious with such reports as they don't contain a lot of detail on how and what exactly was measured, and they have been through legal and public relations "washing machines" for a while). What's a bit odd is that Vodafone Germany has around 36 million customers while 3UK has around 7 million.

I would have expected Vodafone's number to be higher, especially as their figure was not an average but was measured during a peak load scenario, while 3's number sounds like a daily average. So, assuming the numbers are correct, perhaps the difference is due to pricing!? 3UK offers data very cheaply on prepaid and in very high quantities (15 GB for around 18 euros, and it's possible to top up again once the limit has been reached). Vodafone Germany is not quite at this point yet (5 GB for around 24 euros). I don't have numbers on this, so this part is pure speculation.

Despite the difference in customers, however, the number of base station sites seems to be similar, with 3UK saying they have about 12.000 and Vodafone Germany saying they have about 13.000 (that number is from 2009; they have probably added some more in the meantime).

Why the US Needs LTE Smartphones in 2012 and Why They are Neither Needed nor Wanted in Europe

CES 2012 has come and gone, and I am quite amazed at the kind of spin even some technically sound German tech websites (here and here) have put on US LTE smartphones and why we are not seeing them over here in Europe. Their spin is that the US is far ahead with its LTE smartphones and Europe is lagging behind. Actually it's quite the other way around if you don't let yourself be blinded by the words LTE and 4G. Here's why:

In the US, carriers like Verizon and Sprint have a problem with their CDMA networks: in their current deployment state they are quite limited in terms of performance and capacity, to a few hundred kilobits up to perhaps a megabit or two per second per user. Development of this radio technology has come to an end, quite in contrast to W-CDMA (UMTS), which goes from strength to strength with its HSPA evolution path. Verizon's and Sprint's networks have become crowded, and they had to introduce LTE as quickly as possible to get further capacity and also higher speeds per user. The downside is that current Verizon phones run two radio chips simultaneously, one for LTE data and one for CDMA over which voice services are handled. As a result, the smartphones are bulky and battery performance is an issue. For details see here.

AT&T is in a slightly better position with their HSPA network, as they could build out their network to perform well and offer sufficient capacity. Going along this route for smartphones would spare them the trouble of dual radio devices and the issues described above. AT&T may opt for CS fallback for voice instead of dual radio, but the downside would be significantly longer call establishment times and higher call setup failure rates (at least by European standards). So they could do the smart thing and use LTE for dongles, tablets and other non-voice devices for the moment. But it seems everything that is not LTE these days in the US is seen as inferior by the public, and it's difficult to market around that. It's true, LTE is superior for pure data services on a 20 MHz carrier (only used in Europe for the moment), but not with the 10 MHz LTE carriers used in the US, and not for smartphones where a quick and reliable voice service is still important.

Let's have a look at Europe. It is true that in most countries LTE is not yet deployed. There are some exceptions, notably the Nordic countries and Germany. But here, all network operators have chosen to focus on data-only devices for LTE. With the introduction of HSPA+ and data rates well into the 30 MBit/s range in unloaded cells when dual carrier HSPA+ is used, the technology can deliver at least as good a throughput as the 10 MHz LTE deployments. In addition, the voice service is integrated, which means no bulky dual radio smartphones are necessary and no CS fallback ruins your call setup performance. While AT&T's HSPA network is under constant criticism, HSPA networks over here perform well and there are no signs that this will change anytime soon. So offloading the dongle users, who use much more data than smartphones anyway, to LTE and keeping the smartphones on the evolving HSPA network is the much smarter choice. And by the way, HSPA+ networks are still considered 3G over here in Europe and don't have to be marketed as 4G like in the US.

I can see why the mainstream press is easily fooled by terms such as 4G and LTE, which are newer than 3G and UMTS, but for smartphones, well built UMTS networks continue to outperform LTE. Don't get me wrong, there's nothing wrong with LTE, it's a great technology, but until the voice issue is solved in one way or another it just doesn't make sense for smartphones.

The more time passes, the more likely it seems to me that dual radio smartphones will shrink in size and that the power consumption overhead of the second radio will diminish. Once those two factors are right, dual radio might be just another solution to the voice problem, even for European operators and users.

Wi-Fi Protected Setup (WPS) Insecurities

At the end of 2011, Stefan Viehböck published a paper on the insecurity of the Wi-Fi Protected Setup (WPS) protocol and how implementation flaws make it even worse. With code to exploit these weaknesses now in the public domain, WPS-enabled routers are easily crackable under certain circumstances that seem to be widespread. There's lots of information on this to be found on the web in the meantime, and since I think this is an issue not to be underestimated if your neighbors have kids who spend their afternoons with the latest hacker tools, I thought it was time to learn a bit more about it and collect some sources for further reading. Here's the result:

The initial weakness found was that many routers on the market today have WPS activated by default with a PIN printed on the device and allow an unlimited number of WPS pairing attempts. Because the two halves of the PIN are verified separately and the last digit is only a checksum, a brute force attack needs only around 11.000 attempts and is successful within a few hours. This is what was discovered by Stefan and described here, with a Wikipedia entry here and a US-CERT vulnerability note here.
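
As a rough sketch of the numbers behind the "few hours" (the time per attempt is an assumption of mine and varies per router), the search space collapses from ten million real PINs to about 11.000 because the two halves are confirmed separately:

```python
# Why the online brute force is feasible: the two PIN halves are verified
# separately and the 8th digit is only a checksum.

full_pin_space  = 10 ** 7            # 8 digits minus the checksum digit
split_pin_space = 10 ** 4 + 10 ** 3  # first half, then second half (3 digits, checksum derivable)

seconds_per_attempt = 1.5            # assumption; real values depend on the router
worst_case_hours = split_pin_space * seconds_per_attempt / 3600

print(full_pin_space, split_pin_space, round(worst_case_hours, 1))  # 10000000 11000 4.6
```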

If a router implements WPS in this faulty way, the only solution is to turn WPS off, hope for a software update in the future and for the moment rely on the WPA-PSK password authentication scheme, which is just as simple to use and much more secure anyway. As it turns out, there are products out there where WPS can't be switched off at all or, what's even worse, where the web GUI has an option to turn it off but it remains active nevertheless.

Better WPS implementations have a safeguard against this by:

  • limiting the number of attempts that can be made before WPS pairing is blocked for some time
  • using a different PIN for every pairing attempt
  • limiting the pairing time to two minutes

Unfortunately, that does not solve the whole problem. If an attacker is able to record a successful WPS pairing between two devices, it's possible to retrieve the authentication details in an offline brute force attack in a reasonable amount of time, due to the PIN being only 7 digits plus 1 checksum digit. Fortunately, the odds of being able to intercept a WPS pairing and then perform an offline brute force calculation of the credentials are much smaller than for an active online brute force attack, as the attacker has to capture the pairing in the first place. A good explanation of this can be found in episode 337 of my favourite weekly security podcast, 'Security Now'.

So for people who like their home networks to be secure, the best advice is to turn WPS off. Good luck!

Update, 6 Feb. 2012: Episode 338 of Security Now has an errata section early on in the podcast which makes clear that it is NOT possible to get the WPS PIN and WPA key by observing a successful pairing and then cracking it offline. This is because at the beginning of the PIN exchange a Diffie-Hellman key exchange is performed to encrypt (not authenticate!) the rest of the conversation. This prevents the offline cracking approach.

IP Transit Prices Continue To Fall

An interesting parameter in the Internet world is how much a carrier has to pay to send a certain amount of data to another part of the Internet. The billing basis for that is the peak bandwidth requirement, i.e. the peak MBit/s going through an interface (at the 95th percentile). So if a carrier has 20 GBit/s of traffic going to or from it, that's 20 * 1000 = 20.000 MBit/s to pay for. According to Telegeography, the current price per MBit/s is around 4 euros in London and New York, down from over 4 times that price only 4 years ago. So for the 20 GBit/s peak, the carrier has to pay 20.000 * 4 euros = 80.000 euros a month. Sounds like a lot, but 20 GBit/s is not a negligible number either.

How many DSL lines might that be? Obviously, you can't simply divide the 20 GBit/s by the average MBit/s a DSL line offers, as most of the time a line is unused and there is some sort of statistical multiplexing between the users of a network. Also, most TV offers terminate in the network operator's own network, so that traffic doesn't go over an IP transit link. A couple of years ago, the over-subscription ratio between the line rate and the backhaul from the DSL access multiplexer was said to be in the order of 1:50. Perhaps the ratio is a bit lower today with all those Youtube videos, so let's say it's 1:25. Assuming DSL lines with an average speed of 8 MBit/s each (which includes much slower and much faster connections), 20 GBit/s would be enough to support (20.000 MBit/s / 8 MBit/s) * 25 = 62.500 lines. Dividing the 80.000 euros a month for IP transit by the 62.500 lines results in about 1.30 euros per line per month for the transit.
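
Here is the same per-line estimate as a small sketch; the over-subscription ratio and the average line speed are the assumptions from the paragraph above, not measured values:

```python
# Per-line IP transit cost estimate with the assumptions from the text.

peak_gbit_s      = 20
price_per_mbit   = 4        # euros per MBit/s per month (Telegeography figure)
oversubscription = 25       # assumed 1:25 ratio between line rate and backhaul
avg_line_mbit_s  = 8        # assumed average DSL line speed

monthly_transit = peak_gbit_s * 1000 * price_per_mbit                      # 80,000 euros
lines_supported = peak_gbit_s * 1000 / avg_line_mbit_s * oversubscription  # 62,500 lines
cost_per_line   = monthly_transit / lines_supported                        # ~1.28 euros per line per month

print(monthly_transit, int(lines_supported), round(cost_per_line, 2))
```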

The numbers in the second paragraph are all assumptions. If you agree or disagree with them and have references for one or the other parameter, please consider leaving a comment; I'd be quite interested.

Obviously, the IP transit cost per line as calculated here is only one part of the overall costs a network operator has for its DSL network. All the cables, the DSLAMs, the central offices and the capital and operational expenditure for them are the other part and have nothing to do with the IP transit costs.

St. Helena: No Cable, no Broadband Internet, no Wireless

My friends regularly mock me for checking network coverage for the places I plan to go on vacation and say that I would feel uncomfortable the moment I ran out of coverage. It's probably true. So you will understand my shock and disbelief that there is a UK overseas territory in the South Atlantic that has no wireless network at all (not even GSM). Even worse, landline calls and Internet connectivity to anyone beyond the 4000 inhabitants are prohibitively expensive.

I'm talking of St. Helena, a small island roughly halfway between South America and Africa. Once an important stronghold for shipping, it is now pretty much cut off from the rest of the planet. An aging satellite link is used for voice and Internet connectivity, with prices between 29 and 119 pounds for data buckets between 300 MB and 3 GB. No, this is not over wireless; these are data buckets for DSL lines. On top of that, the satellite bandwidth shared by all inhabitants is limited to 10 MBit/s. That makes the 25 MBit/s I have on my DSL line at home, which I only share with my household and not with 4000 other people, shine in a totally new light. You can imagine what "busy hour" on that network looks like…
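
To put the 10 MBit/s into perspective, here's a crude per-person comparison; the household size of four is purely my assumption for illustration:

```python
# Crude per-person bandwidth comparison, ignoring statistical multiplexing.

island_kbit_per_person    = 10 * 1000 / 4000  # 10 MBit/s shared by ~4000 inhabitants
household_kbit_per_person = 25 * 1000 / 4     # 25 MBit/s DSL shared by an assumed 4 people

print(round(island_kbit_per_person, 1))    # ~2.5 kbit/s per person on the island
print(round(household_kbit_per_person))    # ~6250 kbit/s per person at home
```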

The British government wants to build an airport on St. Helena to stimulate tourism. But really, who wants to go there when Internet connectivity is limited at best and your iPhone can't communicate with the rest of the world? Ten years ago, this might still have worked. Today only those suffering from communication overload might consider it. I doubt one can fill planes that way.

But there's a solution, and it's called SAex, a new submarine cable about to be laid between South America and Africa, passing St. Helena at a distance of only about 50 km. Currently there are no plans to connect the island, which I would find quite a shame. But perhaps there's a chance, with initiatives such as "Connect St. Helena" creating awareness of this once in a lifetime opportunity.

In the meantime the press has caught wind of the opportunity as well, such as here and here. The Facebook page of the initiative is here. An airport is a nice thing to have, but in the 21st century, connectivity to the rest of the world is not just planes and ships anymore…