IPv6 Crash Course – Part 1

IPv6 has been 'just around the corner' for many years now but hasn't really surfaced so far. For LTE, Verizon currently plans to make IPv6 mandatory and IPv4 optional, which in practice will probably mean that the first thing most devices do is get an IPv4 address anyway. One way or the other, however, IPv6 will show up one day and I've been wanting to have a look at it for years. So I decided to pick up "Migrating to IPv6" by Marc Blanchet for a little crash course.

The book is quite massive and you could probably spend weeks absorbing all the details. For my purposes, however, the basics are enough for now, so that I know how things work in general compared to IPv4. So I skimmed through the chapters relevant to me and here are the most important details to remember:

128 Bit Addresses

The first big thing most people know about IPv6 is that IP addresses are getting longer, to fix the problem of v4 addresses slowly running out these days. While IPv4 uses 32-bit addresses (e.g. 10.124.30.2), IPv6 uses 128-bit addresses with a different notation (e.g. 3ffe:20e3:4000:abcd:0102:9988:7766:0001), which looks quite complicated at first if you are accustomed to IPv4 addresses. Addresses starting with fe80 are link-local addresses, i.e. they are not routed beyond the local network borders. Addresses starting with 2000 to 3fff are global unicast addresses that can be used to reach any server worldwide that is connected to the IPv6 Internet.
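
To get a feel for the notation, Python's built-in ipaddress module can parse and classify such addresses. A quick sketch using the example address from above:

```python
import ipaddress

# The global unicast example address from above
addr = ipaddress.IPv6Address("3ffe:20e3:4000:abcd:0102:9988:7766:0001")
print(addr.compressed)  # leading zeros are dropped: 3ffe:20e3:4000:abcd:102:9988:7766:1

# A link-local address, which is never routed beyond the local network
link_local = ipaddress.IPv6Address("fe80::1")
print(link_local.is_link_local)  # True
print(addr.is_link_local)        # False
```

The notation also allows one run of all-zero groups to be compressed to '::', which is why fe80::1 is a valid spelling of fe80:0000:0000:0000:0000:0000:0000:0001.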

In IPv4 there's a subnet mask that defines which part of the IP address describes the network and which part identifies the hosts in that network. This is necessary to have the flexibility to address both small and large local networks despite the limited address range. In IPv6, things are much more static and simple, a luxury of the 128-bit address range. A local network always gets the lower 64 bits to address hosts. That's twice as many bits as the entire current IPv4 address space. The upper 64 bits are used for routing: 48 bits for global routing and 16 bits within ISPs to serve their clients. With 16 bits, each ISP can serve 65,536 clients that can then each have 2^64 hosts. That should last for a while. What is known as the subnet mask in IPv4 becomes the network prefix in IPv6, which is (almost) always 64 bits, as described above.
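
As a small illustration of this split (using the 2001:db8::/32 documentation prefix, not a real assignment), the 16 bits between the 48-bit global routing prefix and the 64-bit host part yield 65,536 networks:

```python
import ipaddress

# Hypothetical client site behind a 48-bit global routing prefix
site = ipaddress.IPv6Network("2001:db8:abcd::/48")

# The 16 bits between /48 and /64 enumerate the local networks
print(2 ** (64 - 48))  # 65536 /64 networks per /48

# Each /64 network has 2**64 host addresses
first_subnet = next(site.subnets(new_prefix=64))
print(first_subnet)                # 2001:db8:abcd::/64
print(first_subnet.num_addresses)  # 18446744073709551616 = 2**64
```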

The downside of 128-bit addresses is that the IPv6 header is significantly longer than the original IPv4 header. IP header compression will therefore become very important in wireless networks to minimize wasted bandwidth.

Optional Headers

Many IPv4 header fields are not always used today, so for IPv6 it was decided to move them into optional extension headers. If they are needed, they are appended; otherwise they are left out. ICMP, for example (see below), is put into an extension header.
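
The mechanism behind this is the Next Header field in the fixed 40-byte IPv6 header: it points either to the first extension header or directly to the payload protocol. Here's a minimal parser for the fixed header (my own sketch, not from the book) to show the layout:

```python
import struct

def parse_ipv6_fixed_header(data: bytes) -> dict:
    """Parse the fixed 40-byte IPv6 header; extension headers follow if present."""
    ver_tc_flow, payload_len, next_header, hop_limit = struct.unpack_from("!IHBB", data)
    return {
        "version": ver_tc_flow >> 28,   # always 6
        "payload_length": payload_len,  # everything after the fixed header, in bytes
        "next_header": next_header,     # e.g. 58 = ICMPv6, 44 = fragment extension header
        "hop_limit": hop_limit,         # replaces the IPv4 TTL
        "src": data[8:24].hex(),
        "dst": data[24:40].hex(),
    }
```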

That's it for part 1. More in part 2 which is coming up soon.

The Future of Communications

I like CTOs who have a vision that goes beyond what might or might not be possible in the immediate future. One of those is definitely my ex-boss John Roese, who has come up with two blog posts on what the future of communications could and should look like (see here and here). His approach is interesting: Let's stop thinking about how the technology we have today enables us to communicate. Rather, take a step back and think about how it should be, or how you would like it to be, without regard to how that could be implemented with any of today's or tomorrow's technology you might be aware of. Two truly thought-provoking blog entries, highly recommended!

LTE Dynamic and Semi-Persistent Scheduling

A technical post today – Here's some background info on how users are scheduled on the LTE air interface:

Dynamic Scheduling

In most cases, scheduling is fully dynamic. In the downlink direction, resources are assigned when data is available. For data to be sent in the uplink, the mobile dynamically requests transmission opportunities whenever data arrives in its uplink buffer. Information about data being sent in the downlink and about uplink transmission opportunities is carried on the radio layer control channel, which is sent at the beginning of each sub-frame.

Semi-Persistent Scheduling

While dynamic scheduling is great for bursty, infrequent and bandwidth-consuming data transmissions (e.g. web surfing, video streaming, email), it is less suited for real-time streaming applications such as voice calls. Here, data is sent in short bursts at regular intervals. If the data rate of the stream is very low, as is the case for voice calls, the overhead of the scheduling messages is very high, as only little data is sent per scheduling message.

The solution for this is semi-persistent scheduling: instead of granting each uplink or downlink transmission individually, a recurring transmission pattern is defined. This significantly reduces the scheduling assignment overhead.

During silence periods, today's wireless voice codecs stop transmitting voice data and only send silence description information at much longer intervals. During those silence times the persistent scheduling can be switched off, which is probably why it's called semi-persistent scheduling. In the uplink, the semi-persistent grant is implicitly canceled if no data is sent for a network-configured number of empty uplink transmission opportunities (see 3GPP TS 36.321, chapter 5.10). In the downlink, semi-persistent scheduling is canceled with an RRC message.
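
To put a rough number on the savings, here's a back-of-the-envelope sketch; the talk spurt length is my own assumption, not a 3GPP figure:

```python
# Scheduling messages needed for one talk spurt of a voice call:
# dynamic scheduling grants every single packet, while semi-persistent
# scheduling activates a recurring transmission pattern just once.

VOICE_FRAME_INTERVAL_MS = 20  # one AMR voice packet every 20 ms
TALK_SPURT_MS = 2000          # assumed average talk spurt length

packets = TALK_SPURT_MS // VOICE_FRAME_INTERVAL_MS  # 100 packets

dynamic_grants = packets  # one control channel message per packet
sps_grants = 1            # one activation, pattern repeats every 20 ms

print(f"packets per talk spurt:      {packets}")
print(f"dynamic scheduling messages: {dynamic_grants}")
print(f"semi-persistent messages:    {sps_grants}")
```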

The logical question that follows is how the network can figure out when and for which packets to use semi-persistent scheduling. The answer is QCI and dedicated bearers. More about that in a follow-up post.

Some Wi-Fi Draft-N Performance Measurements

I've been running my 802.11n-capable Fritzbox only on the older and slower 11g standard so far, due to the lack of a notebook or other equipment that could do anything faster. Also, with a sub-10 MBit/s DSL connection you don't really need more anyway. But now, with a new notebook and a 25 MBit/s VDSL line about to be installed, things have changed. So it's time to do some performance measurements.

Here's the baseline: the Wi-Fi access point is in a room adjacent to the office, and iperf gives me an 802.11g throughput of around 21 MBit/s, both in the office and when I move close to the access point. The iperf server was running on a computer connected to the access point by Ethernet cable. That's pretty much the top speed that can be reached in practice with this version of the standard. On the client side I was using a notebook with an Intel 5100 AGN wireless chipset.
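
For reference, the measurement boils down to a timed TCP bulk transfer. A minimal Python stand-in for the client side of such a test might look like this (the server address is made up; any iperf-style bulk-receive server on the wired machine will do):

```python
import socket
import time

def measure_tcp_throughput(host: str, port: int = 5001, seconds: int = 10) -> float:
    """Send bulk TCP data to a discard-style server and return achieved MBit/s."""
    payload = b"\x00" * 65536
    sent_bytes = 0
    deadline = time.monotonic() + seconds
    with socket.create_connection((host, port)) as sock:
        while time.monotonic() < deadline:
            sock.sendall(payload)
            sent_bytes += len(payload)
    return sent_bytes * 8 / seconds / 1e6

# print(measure_tcp_throughput("192.168.1.10"))  # hypothetical address of the wired server
```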

When both the iperf server and client machines were connected to the access point via Ethernet cable, top speeds reached 90 MBit/s, which is about the practical limit of a 100 MBit/s twisted-pair Ethernet connection. That's important to remember.

Next, I switched on the 802.11n capability in the Wi-Fi access point and switched to WPA2 encryption, as the somewhat older access point chipset only runs 11n with this encryption. I also configured it for a 40 MHz channel, twice as wide as the channel used by the older 802.11g standard.

For the first test, I operated the access point in the standard 2.4 GHz band. Top speeds were around 70 MBit/s, again both in the office and close to the router, in uplink and downlink direction. During the transmission, the status window on the notebook showed a link speed of 135-150 MBit/s. At this point I thought that was not too bad.

For the next test I switched to the 5.8 GHz band, as it's supposed to be less crowded, and indeed the neighbor search on the access point showed no other base stations in sight. Close to the access point I was again able to get 70 MBit/s. In the office, however, the top speed dropped to only 30 MBit/s!? So either the higher frequency range couldn't handle the wall and the additional distance, or the antennas in the notebook or access point are not optimized for this frequency band. O.k., I'll stick to the 2.4 GHz band then.

For a final test, I restricted the channel bandwidth to 20 MHz, expecting to see half of the 70 MBit/s from before. To my surprise I still got 58 MBit/s out of the channel. So either the neighboring access points were interfering more with the 40 MHz wide channel, or the access point can't really keep up with the transmission rate due to ciphering or other computationally intensive tasks, and the air interface would have been capable of much more with the 40 MHz channel.

Another interesting number is the data rate generated by the TCP acknowledgments in the return direction. Running a 70 MBit/s TCP data flow in one direction requires more than 1 MBit/s of bandwidth for the acknowledgments. That's quite something! Compare that to the roughly 400 kbit/s required for the 21 MBit/s I got out of the baseline configuration.
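
A quick sanity check of that number; the frame sizes and the delayed-ACK behavior are my own assumptions:

```python
# TCP acknowledgment bandwidth for a 70 MBit/s bulk transfer
MSS = 1460                # typical TCP payload per segment in bytes
ACK_FRAME = 20 + 20 + 14  # IPv4 + TCP + Ethernet headers, no options (assumed)
THROUGHPUT_BPS = 70e6

segments_per_s = THROUGHPUT_BPS / 8 / MSS  # ~6000 segments/s
acks_per_s = segments_per_s / 2            # delayed ACK: one ACK per two segments
ack_bps = acks_per_s * ACK_FRAME * 8

print(f"{ack_bps / 1e6:.1f} MBit/s of acknowledgment traffic")  # ~1.3 MBit/s
```

That lines up with the 'more than 1 MBit/s' measured above.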

One thing I couldn't quite figure out were frequent throughput drops. Suddenly, during a transmission, the throughput would drop to just 2-3 MBit/s for tens of seconds and then, just as suddenly, things would go back to normal. During those times I started a download over my 3 MBit/s DSL line to see if the throughput would increase, but I couldn't really see that. So either the notebook or the access point seems to have a problem. I'll get an access point with a newer chipset soon and hope I won't see this again.

And one final number to contemplate: Over the course of only one and a half hours I transferred over 20 gigabytes of data. Not too difficult at those speeds.

While the measured speeds are already quite impressive, I expect that things can go even faster, especially in a 40 MHz carrier with newer chipsets. I'll keep you posted.

Cheap 2.6 GHz Licenses in European Nordic Countries

When the UMTS licenses were auctioned in Europe back in 2000, new and old network operators in some countries spent enormous amounts of money. In Germany, for example, a record sum of 50 billion euros was paid by six companies. These days, a new round of auctions is coming up, or has already taken place, for frequencies in the 2.6 GHz band, foreseen to be the main band for the launch of LTE. Interestingly enough, these licenses were sold pretty cheaply in the Nordic countries. According to IntoMobile, Finland just sold its licenses for 3.8 million euros. Norway's and Sweden's auctions are already over as well, and the proceeds were 25 and 230 million euros respectively.

I don't know much about the terms and conditions of these auctions, but it looks like things were a bit more realistic this time around. Even taking the 230 million euros paid in Sweden and scaling it to the population of Germany, the result would still 'only' have been around 2 billion euros, a tiny fraction of the 50 billion for the UMTS licenses. I hope all players in other countries are as sensible when it comes to new spectrum auctions. After all, have you seen where those 50 billion euros went in Germany after the auction?
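
The scaling behind that figure is simple per-capita arithmetic; the population numbers are my own rough estimates:

```python
SWEDEN_POPULATION = 9.2e6    # rough 2009 estimate
GERMANY_POPULATION = 82e6    # rough 2009 estimate
SWEDEN_PROCEEDS_EUR = 230e6  # Swedish 2.6 GHz auction result

scaled = SWEDEN_PROCEEDS_EUR * GERMANY_POPULATION / SWEDEN_POPULATION
print(f"{scaled / 1e9:.1f} billion euros")  # ~2.1 billion vs. 50 billion for UMTS
```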

How Many Voice Calls Can You Squeeze Into 1 MHz?

The air interface is the scarcest resource of a mobile network, and the industry is therefore not only looking to improve peak data rates but also efficiency under everyday conditions. Voice calls are, and probably will remain for quite some time to come, the most popular mobile service, so reducing the overall amount of spectrum they require, to leave more room for bandwidth-intensive applications, is an appealing goal.

Holma and Toskala's book on LTE, which I reviewed recently, has an interesting analysis of this topic. The results sound quite amazing to me, so I thought I'd share them with you. In their chapter on VoIP they compare the air interface efficiency of GSM, UMTS, HSPA and LTE. Here are some of the highlights:

In their study, voice over GSM, UMTS and HSPA is circuit switched in nature and therefore has the advantage that little overhead is incurred by the different layers of the IP protocol stack. GSM voice capacity was calculated for the Enhanced Full Rate (EFR) and Adaptive Multi-Rate (AMR) codecs mostly used in today's GSM networks. It's not straightforward to calculate how many voice calls fit into 1 MHz, as adjacent GSM base stations have to use different channels to avoid interference. Due to the modulation, directly adjacent channels can't be used, and interference is further countered with hopping carriers, i.e. the transmission frequency is changed for each frame. Taking all these and other things into account, they come to the conclusion that there can be around 4 EFR calls per MHz, or 8 AMR calls. Quite a difference already.
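
As a very rough sketch of where such a number comes from (the reuse factor below is my own assumption; the book's analysis is far more detailed):

```python
# GSM: 200 kHz carriers with 8 TDMA timeslots each, but the spectrum has to
# be shared across a frequency reuse pattern so neighboring cells don't interfere.
CARRIER_BANDWIDTH_MHZ = 0.2
TIMESLOTS_PER_CARRIER = 8
REUSE_FACTOR = 9  # assumed effective reuse for EFR

carriers_per_mhz = 1 / CARRIER_BANDWIDTH_MHZ  # 5 carriers
calls_per_mhz_per_cell = carriers_per_mhz * TIMESLOTS_PER_CARRIER / REUSE_FACTOR
print(f"{calls_per_mhz_per_cell:.1f} full-rate calls per MHz per cell")  # ~4.4
```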

The same calculation for UMTS and HSPA is probably a bit simpler, as all base stations use the same frequency. Interference from neighboring base stations is part of the design and limits the available bandwidth in neighboring cells. The more load in a cell, the more interference in neighboring cells and the less capacity there. With a simulation of a real-life scenario, Holma and Toskala estimate the voice capacity per MHz at around 12 calls for UMTS and around 24 for HSPA (both AMR 12.2). Note that voice over HSPA is not yet deployed in live networks, as it's a relatively new feature. For details have a look here.

And now over to LTE. As for the other technologies, the authors have taken lots of layer 1, 2 and 3 mechanisms into account, for example the efficiency of using a 1 ms TTI for a single 20 ms voice frame, how buffering several voice packets and then sending them together impacts performance and latency, different voice codecs, dynamic vs. persistent scheduling, use of signaling resources, and so on. The surprising result, at least to me, is that voice capacity is even higher than for HSPA: they estimate it at 50 parallel calls per MHz for AMR 12.2 and over 80 parallel calls per MHz with AMR 5.9.
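
Pulling the headline numbers together (AMR 12.2 unless noted; the LTE AMR 5.9 figure is quoted as 'over 80'):

```python
calls_per_mhz = {
    "GSM (EFR)":       4,
    "GSM (AMR)":       8,
    "UMTS":           12,
    "HSPA":           24,
    "LTE (AMR 12.2)": 50,
    "LTE (AMR 5.9)":  80,
}
for technology, calls in calls_per_mhz.items():
    print(f"{technology:16s} {calls:3d} calls/MHz")
```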

Summary

The numbers are stunning and offer interesting opportunities for the future. According to them, LTE is 10 times more efficient at transporting voice calls than the current GSM deployment. That is, of course, only if the voice calls are controlled by the operator and all optimizations used in the calculation are put into the game. For over-the-top VoIP, that's hardly going to be the case.

Opera Mini – Still My Preferred Browser

I've been playing around a bit with some devices with 'full web browsers' lately (again). However, no matter how hard I try to convince myself that this is the way forward, I'm still drawn back to Opera Mini on my N95. Especially when traveling to work by train each day, Opera Mini loads compressed web pages much faster than browsers that download the full pages, especially in spots where only 2G coverage is available. Also, navigating pages with hardware keys, by pressing number keys to scroll up and down, is much faster than moving a finger over a touchscreen. It might not look as nice and it takes a while to learn the key combinations, but it is much more convenient and it only takes one hand to browse the web.

Impressive List of Android Device Manufacturers

It's interesting how, just about one year after the release of the first Android phone in Q3/2008, the list of companies that have launched or are close to launching an Android phone has grown far beyond just HTC. Here's an overview:

  • HTC 
  • LG
  • Samsung
  • Motorola
  • Sony Ericsson
  • Huawei
  • Dell
  • Acer (o.k. a netbook, but let's count them, too)

The list is probably not complete, and there are likely also some smaller, lesser-known Asian manufacturers in the game as well. So except for Nokia and Apple, all major manufacturers are now on board, most with a multi-OS strategy. That could make the air a bit thin in the future for other open OSes such as Symbian and Maemo. I guess Google must be pretty pleased with the success of their platform so far.

I still remember that not so long ago, few would have believed such a rapid development was possible. That includes me.

NTT DoCoMo To Switch-Off 2G in 2011

The news web sites have it today (here, here and here): NTT DoCoMo announced that they plan to switch off their 2G network in March 2011 and rely solely on their UMTS and LTE networks afterwards. Wow, a great step, but they are likely to remain an exception for quite some time to come.

The reason is that DoCoMo uses a 2G wireless technology that's pretty much incompatible with anything else in the rest of the world. In other words, they won't lose a lot of roaming charges with this move.

By switching off their 2G network they'll save money on two fronts: First, there's one network layer less to keep running, which certainly saves a great deal of money. Second, they no longer need proprietary dual-mode 2G/3G devices and can go forward with the dual-mode GSM/UMTS mobiles that are sold in the rest of the world (plus an additional frequency band, see below) or triple-mode GSM/UMTS/LTE devices. And while we are at it, does anyone know if current devices are still dual-mode, or has DoCoMo phased this out already and just keeps the 2G network running for legacy devices?

And a final thought on this one for today: It looks like DoCoMo doesn't only use the 2.1 GHz band for 3G in Japan, but also a band in the 800 MHz region (FOMA Plus), which is also used by their 2G PDC system. Here are some more details on this from a report on a Blackberry version for Japan. That report also indicates that the band is different from the 850 MHz band used in the Americas. It would be too simple otherwise… So once they switch off their 2G system, they can do further re-farming. Also a nice benefit.

The 3G Stick on the Way Into the Notebook

When you walk through any town in Germany, and in many other countries these days, and have a look at what's advertised in mobile phone stores, it's usually phones (naturally) and notebooks or netbooks with a 3G USB dongle at a reduced price. Some stores have now started to differentiate a bit and advertise net-/notebooks with built-in 3G connectivity. For most people it makes much more sense to have the 3G card inside: it takes less space and you can't forget to take the dongle with you. But there are also some major disadvantages:

  • A 3G USB dongle can be used with several computers and at least for me that counts for something.
  • Also, it's easy to exchange the SIM card, which I do a lot when traveling. I expect, though, that most people won't care about this one.
  • And then there's reception. In most parts of Europe, UMTS is still only on 2.1 GHz, except for a few places with 900 MHz coverage. In other words, in-house reception is far from optimal in many places. So every now and then I am very happy about a USB stick solution that I can extend with a 2-3m USB extension cable and hang over a lamp or place close to a window for better coverage and faster speeds.
  • And then the stick can be used as the receiver for a 3G/Wi-Fi bridge such as this one. Again, most people won't care, but I like it a lot. When I travel alone, the stick is in the PC, but then it's great to be able to share the connectivity when the need arises.

So, do you think 3G USB sticks will mostly be integrated into netbooks and notebooks over the next couple of years, or will the majority of network operators and users prefer an external solution?