Do cheap 3G licenses push coverage?

So far I always thought that high 3G license costs, like the 50 billion euros that were spent in Germany, would inhibit growth, deployment and use of 3G networks and services. I am not so sure anymore: according to the ITU, 3G licenses were given out for free in Finland, with only a modest administration fee of 1000 euros per 25 kHz (per year, I assume, according to this article).

Even if you consider the difference in population (roughly 80 million Germans vs. roughly 5.3 million Finns), 0.2 million euros for 5 MHz a year is still next to nothing. While Germany has a widespread deployment of 3G networks in 2006, Finland seems to be far behind, with only a few major cities covered (take a look here at the GSMA coverage maps). Also, there doesn't seem to be a big difference in 3G pricing. Only recently, Finnish carriers seem to have introduced interesting data tariffs which let you surf the web for 20 euros a month with the bandwidth limited to 128 kbit/s. Full speed access is available for 40 euros a month. Similar offers are available these days in other countries as well, where operators have paid a lot more for their licenses.
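To check the number above, the annual Finnish fee for one 5 MHz UMTS carrier works out like this (assuming, as stated above, that the 1000 euros per 25 kHz are indeed due per year):

```python
# Annual Finnish licensing fee for one 5 MHz UMTS carrier,
# assuming the 1000 euros per 25 kHz are due per year.
fee_per_25khz = 1000                    # euros per year
carrier_bandwidth_hz = 5 * 1000 * 1000  # one UMTS carrier is 5 MHz wide
blocks = carrier_bandwidth_hz // 25000  # number of 25 kHz blocks
annual_fee = blocks * fee_per_25khz
print(annual_fee)  # -> 200000 euros per year, i.e. 0.2 million
```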

So, looking only at these two examples, it seems to be almost the other way around from what I had initially thought: the more expensive the licenses, the more eager companies seem to be to deploy the technology. After all, if you have already spent far more money on the licenses than is necessary for buying and deploying a network, the network costs don't seem to hurt that much anymore.

Two more examples to make the confusion perfect:

  • Japan: 3G licenses were given out for free as well according to this Gartner article. But contrary to Finland, 3G is a major hit in Japan these days.
  • France: Only two companies have deployed a UMTS network, with a third operator slowly starting in 2006. The cost for the licenses was 4.5 billion euros per operator. 3G coverage, however, is still limited to major cities in France (see the link to the map above again). Cities below 100,000 inhabitants usually don't have UMTS coverage by the end of 2006. So in this country, 3G coverage is quite limited despite very high licensing costs.

Strange, though, that end user prices seem to be pretty much on the same level, no matter how much operators had to spend on licenses. Anyone’s got an explanation for this?

Summary: I think it's pretty obvious that licensing fees do not have the same impact in each country as far as coverage and end user pricing are concerned. There seem to be other factors, different in each country, which have a much more profound impact.

Podcast: Wireless Operator Landscape in Finland

This is the third podcast in my series about how wireless operators in different countries around the world offer the wireless Internet to their users. This time I am focusing on Finland, or Nokialand as some people call it, with Nicolas Fogelhom, whom I guess many of you know from his blog about-nokia.com.

From the podcast:

  • The N70 radio is his number one application
  • Camera and mobile phone browsers as driver for the mobile Internet
  • Mobile network operators in Finland: TeliaSonera, DNA and Elisa
  • Unlimited volume data plans with limited bandwidth from Elisa. For those of you speaking Finnish, take a look here for further information.
  • Competition in the Finnish wireless market.
  • GSM/UMTS coverage in Finland. Also take a look here for GSMA coverage maps.
  • Percentage of UMTS phones and current phone subsidies.

The Podcast, 20min, MP3, 14MB

The previous podcasts in this series:

T-Mobile US: 3 Billion Euros for Licenses, 2 Billion for the Network

Recently, T-Mobile USA acquired the necessary frequencies to deploy their 3G UMTS/HSDPA network in the U.S. The cost for the licenses amounts to about 3.3 BILLION euros. Compare this to the cost of hardware and network deployment of about 2 billion euros. While thousands of engineers have worked to build the hardware and software for the networks, less money goes to them than to the government, which hasn't really done a lot to earn that money. Or has it!? Please enlighten me… I know, it was an auction, but I just find that imbalance somewhat strange. Seen from a different point of view, more than half of the money they will charge for network access and services later on does not go to the people who've worked on the technology but to a third party.

P.S.: I know the imbalance is much bigger in other countries, like for example in Germany, where each of the six UMTS licenses was bought for over 8 billion euros at the time…

My N95 GPS Killer Application

O.k., o.k., so the hype around the Nokia Open Studio in New York last week has calmed down a bit. However, I still keep thinking about all the new possibilities the Nokia N95's built-in GPS receiver module and navigation software will open up. So here's my killer application for it:

Regularly, just like two days ago again, I spend five minutes on the phone telling people where I am at the moment and where they can find me. Sometimes it's complicated because I don't really know exactly where I am, and sometimes the other person doesn't know his/her way around the city. Three calls and half an hour later we usually meet. No more with the N95! To tell somebody where you are, you just enter the navigation application and send your current location via SMS to the other party, who hopefully also has an N95 in his/her hand. The SMS is usually received instantly and the navigation application can take my coordinates right out of the SMS. Two or three clicks later, the software calculates the shortest way to me. It's already in the package. Take a look at this video which shows the mapping software on the N95.

There’s one thing that is critical for the scenario above and I hope they’ve done it right:

  • The startup time of the GPS module should be less than 20 seconds to the first fix.

Other potential killer applications which I would wish for:

  • Put location data into taken images
  • Plan a trip on the PC, download trip info to the N95 and then navigate with it
  • Take a picture, geotag it, upload it to Flickr or another site for others to see in their N95 mapping application.
  • … much more once I get such a phone into my hands to try it out for myself.

WiMAX details from Intel

While doing some research on which frequency bands mobile WiMAX will be specified for, I found this interesting website from Intel which gives some information about the upcoming Rosedale 2 chip. It says that Centrino notebooks with the chipset will support both Wifi and WiMAX.

Supported WiMAX Frequency Bands

This .pdf document gives further info on the bands supported by the chip. It looks like it will cover the most important band for the U.S., which is 2.5 GHz, and also the 3.5 GHz band, which will be used in Europe and Latin America. Good preconditions for mobile roaming. I wonder which bands will be used in Asia!? The PDF document also gives a good introduction to the WiMAX network architecture and how Intel plans to integrate WiMAX into notebooks.

Multiple Wireless Technologies with one SIM card

Also, take a closer look at the figures at the end of the document. Looks like Intel would like to see interfaces for authentication and billing into 3GPP2 networks (i.e. current EVDO networks). This would make quite a lot of sense for carriers like Sprint who will deploy WiMAX networks along their already existing EVDO networks. Such an interface would be required for what Intel calls "smart card re-use […] for support of seamless authentication while roaming across technologies" which Intel says "is under investigation". The wording suggests that they are not sure about this detail yet.

TCP settings for HSDPA and ADSL

HSDPA, the 3.5G speed booster for UMTS networks, is up and running in many networks these days, and data cards have been available for some time. Now, HSDPA phones have entered the market with Samsung's SGH-ZV50 and Nokia in close pursuit with the N95. As an article in the latest c't, a German computer magazine, points out, some tweaks are required for the TCP/IP stack of the notebook to achieve full performance.

The tweak mainly consists of increasing the "TCP Receive Window" to 128480 bytes. The window is used to throttle a data transfer via receiver acknowledgements, which advance the receive window. This prevents a buffer overflow at the receiver, which would occur if the sender had a faster connection to the network than the receiver. As HSDPA has a round trip delay time of about 150 ms, which is about three times higher than that of an equally fast DSL connection, three times more unacknowledged data can be in transit. To fully utilize the link, the window size should be at least as large as the bandwidth delay product (BDP) of the connection. Take a look here for further explanations. For an 1800 kbit/s HSDPA link and a round trip delay time of 150 ms, plus let's say an additional 80 ms delay in the Internet, the BDP is about 52 kBytes.
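The BDP figure above can be verified with a quick calculation (the 80 ms Internet delay is, as in the text, just an assumed value):

```python
# Bandwidth-delay product for the HSDPA example above.
bandwidth_bit_s = 1800 * 1000   # 1800 kbit/s HSDPA link
rtt_s = (150 + 80) / 1000.0     # 150 ms radio + 80 ms assumed Internet delay
bdp_bytes = bandwidth_bit_s / 8 * rtt_s
print(round(bdp_bytes / 1000))  # -> 52, i.e. about 52 kBytes in transit
```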

Other values Vodafone suggests changing are the "Maximum MTU size" (packet size), which should be increased to 1500, and the "Maximal TCP connect request retransmissions" parameter, which should be set to 5.

The Test Run

To see if changing the values has a positive effect, I gave it a try myself. I don't have an HSDPA mobile yet (it's all in Nokia's hand…), so instead I tried the settings on my fast ADSL2+ Internet connection. While the round trip times of the ADSL line are shorter than those of HSDPA, the total bandwidth of 7 MBit/s (7000 kbit/s) I get on the line is much higher than HSDPA. Thus, changing the TCP window size should have an impact here as well. To compare, I went to a web site that does not throttle transmissions on its end and downloaded a file. I achieved a download speed of about 3.5 MBit/s. With the new TCP window setting the speed went up to an amazing 7000 kbit/s, i.e. the line was fully utilized. So if you have an ADSL or ADSL2+ connection with a speed exceeding 3.5 MBit/s, the tweak is not only helpful for HSDPA but also for your ADSL connection at home.
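These numbers fit the simple rule that a TCP connection cannot transfer more than one receive window per round trip. A rough sketch, assuming a round trip time of about 40 ms for the ADSL2+ line (my assumption, not a measured value):

```python
# TCP throughput ceiling: at most one receive window per round trip.
def window_limited_kbit_s(window_bytes, rtt_s):
    return window_bytes * 8 / rtt_s / 1000

rtt_s = 0.040  # assumed ADSL2+ round trip time
print(window_limited_kbit_s(17520, rtt_s))  # default window: 3504.0 kbit/s
print(window_limited_kbit_s(64240, rtt_s))  # bigger window: 12848.0 kbit/s
```

With the default window of 17520 bytes the connection is capped close to the observed 3.5 MBit/s; with the bigger window, the 7 MBit/s line is no longer window limited.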

How to Change the TCP Window Size

The TCP Window Size for Windows XP can be optimized with programs like Tweakmaster or TCPOptimizer. While Tweakmaster is easier to handle, their registration process for the free version is somewhat dubious. TCPOptimizer is really free but only seems to be able to change the settings for all network cards at once instead of individually.

It's also possible to change the parameters manually in the Windows registry. For network cards, the TCP receive window can be changed on a per adapter basis in the following registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{Interface ID}. If not already present, create a new DWORD called TcpWindowSize and assign it a value of 64240 (decimal). I also tried to set it to 128480, but for some reason the value is not accepted and the standard window size of 17520 bytes is still used after a restart.

For dial up connections, things seem to be handled differently by the operating system. Here, global values which are valid for all network connections have to be changed. These can be found in the following registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters. If not already present, create the following DWORDs and assign them a value of 128480 (decimal): TcpWindowSize and MaxTcpWindowSize.
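For reference, a .reg file setting the two global values could look like this (a sketch based on the key and values above; 128480 decimal is 1f5e0 in hexadecimal):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"TcpWindowSize"=dword:0001f5e0
"MaxTcpWindowSize"=dword:0001f5e0
```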

To activate network card specific changes, the adapter has to be deactivated and activated again. For global values to take effect, a reboot is required. If HSDPA is not always available, it might be a good idea to remove these values before using a slower connection. This might prove to be somewhat impractical on a day to day basis due to the required restart.

To simplify the process, I've created two short scripts which add or remove the TcpWindowSize parameters from the list of global TCP parameters. Still, a reboot is required. You can find the two scripts at the end of this blog entry. The script to add the parameters is a .reg file, so it can be executed by double clicking on its icon. The script to remove the parameters again is a .inf file which has to be executed by right clicking on its icon and selecting "Install" from the menu.

Bluetooth and HSDPA

A word to HSDPA mobile phone manufacturers: Make sure you put Bluetooth 2.0 into your phones as version 1.2’s top speed of 723 kbit/s is far too slow for HSDPA. I don’t want to be stuck with a cable again!

Download add_128k_tcp_window.REG

Download remove_128k_tcp_window.inf

802.11n: Next Generation Wifi moving forward

The recent podcast by Glenn Fleishman with Matthew Gast, author of "802.11 Wireless Networks: The Definitive Guide", published by O'Reilly, is one of those rare tech podcast jewels which are both entertaining and interesting. Having read Matthew's book a while ago, it was also interesting to learn a bit more about the author behind it.

With lots of more or less compatible Pre-N Wifi and MIMO products coming out these days, and less encouraging news about the state of discussions in the 802.11n standards group, it was also good to hear an update from somebody involved in the process.

Here are my thoughts triggered by the podcast:

State of Discussions (September 2006):

The current version of the IEEE 802.11n standard is called Draft 1 and was released earlier this year. Draft 2 will be out in the middle of 2007 and should resemble the final standard with only minor modifications before final acceptance.

The Wifi standard is driven by two bodies: the IEEE standards group, which is consensus driven, and the Wifi Alliance, which is market driven. The Wifi Alliance is an industry group which ensures interoperability of devices from different manufacturers with a certification program. You might have seen their "Wifi" certification label on various products before. While deliberations on the standard are still ongoing, a number of companies have already started to ship "Pre-N" products, many of them not compatible with each other. As this hurts the overall Wifi ecosystem, the Wifi Alliance has decided to launch a "Pre-N" certification program shortly to tackle the situation.

802.11n for power and size constrained devices

The main goal of 802.11n is to increase the data rate from today's 54 MBit/s (on the physical layer) to 100, 200, 400 MBit/s and beyond. Matthew states in the podcast that the first devices will probably achieve about 200 MBit/s, with higher speeds to be seen in the future. Faster speeds, however, increase power consumption due to increased signal processing requirements and require multiple antennas. While this is less of a problem for devices like notebooks, meeting these requirements with small devices like PDAs and mobile phones is more difficult. Thus, the 11n working group is defining the standard in a way that allows mobile phones and other small devices to implement fewer options and thus optimize power consumption and size requirements. While such devices are slower, the standard is designed so that they are still compatible and interoperable with chipsets for devices like notebooks that include additional options to push the speed limit. To me this makes a lot of sense. While applications like video streaming between PCs, notebooks, TV screens and other devices demand a lot of bandwidth, mobile devices with small screens usually require much less. Even if one day some mobile phones act as Wifi access points to share a 3G or 4G Internet connection, even rudimentary Wifi speeds are still much higher than the speed of the wide area network.

Dimensions of Speed:

Like the current 802.11a and g standards, 802.11n uses Orthogonal Frequency Division Multiplexing (OFDM) on the physical layer. With this method, data is sent over many narrow band channels simultaneously. While current standards use a bandwidth of 20 MHz, 802.11n also uses 20 MHz and optionally 40 MHz. Using twice as much bandwidth in effect also doubles the speed available to users.

To further increase throughput, 802.11n uses a technique called Multiple Input Multiple Output (MIMO). MIMO uses 2, 3 or 4 antennas to send data simultaneously on the same frequency but over different spatial paths. This method is also called spatial multiplexing and exploits the fact that radio waves bounce off objects in the transmission path and thus create several independent paths from sender to receiver.

It's interesting to note that WiMAX and the 3G Long Term Evolution (LTE) project also use MIMO to increase speed. "Pre-N" Wifi devices, however, will be the first on the market to use this new technology.
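The two speed dimensions simply multiply, which is roughly where the 200 MBit/s figure mentioned above comes from (an illustrative calculation that ignores refinements such as shorter guard intervals and higher coding rates):

```python
# Rough 802.11n physical layer rate from channel width and MIMO streams.
base_rate_mbit_s = 54  # 802.11a/g top rate in a single 20 MHz channel
channel_factor = 2     # optional 40 MHz channel doubles the bandwidth
spatial_streams = 2    # MIMO with two spatial streams
print(base_rate_mbit_s * channel_factor * spatial_streams)  # -> 216
```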

Backwards Compatibility

Backwards compatibility is a difficult issue for 802.11n but absolutely necessary, as it shares the 2.4 and 5 GHz bands with older 802.11b, g and a networks. In the 2.4 GHz frequency range, 802.11n has to share the available bandwidth with 802.11b and g networks. While three non-overlapping 20 MHz channels exist, only a single 40 MHz channel fits into this space. As a consequence, several 40 MHz networks cannot coexist in the same place. This is quite difficult in today's crowded Wifi environment. In the 5 GHz range, which is only used by few 802.11a devices today, things look somewhat brighter.

In addition, new 802.11n networks must be able to handle both new 802.11n and older 802.11b, g and a devices. This is possible, but the overall speed in the network decreases. This is because older stations cannot use the bandwidth as efficiently as new devices, and because additional precautions have to be taken to prevent older stations from trampling over ongoing data transfers of new devices, which they are unable to detect.

For situations in which no legacy devices have to be supported and no other networks use the same band, a "greenfield" mode is currently under discussion which throws all precautions overboard in order to increase transmission speeds. Such a mode is similar to the "g only" mode of current 802.11g access points which can be activated if no legacy devices are used.

Summary

The 802.11n standards group currently has to walk a fine line: on the one hand, the standard has to be detailed enough and designed in a way that allows power and size constrained devices to use it as well; on the other hand, the group has to finish its work as quickly as possible to prevent a further fragmentation of the Pre-N market, which has already begun. Not easy, but land is in sight.