The Downside for Verizon of picking LTE

It’s been THE news of the week for the wireless industry: Verizon has decided to go for LTE as its next generation network rather than UMB, the successor technology of its current CDMA 1x EV-DO network. I put down my initial thoughts on the deal here. In the meantime, two additional important points have come to my mind: multimode terminals and backwards compatibility!

UMTS operators that upgrade to LTE will have a smooth migration path, especially since mobile devices are likely to be GSM/UMTS/HSDPA/LTE compatible. LTE makes this especially easy since the air interface has been designed to be able to reuse oscillators and other components from HSDPA. Also, the software stack on the higher layers will probably be partly reusable, as I expect that high-level (NAS) signaling will be similar.

CDMA operators such as Verizon will have a much more difficult story to tell their subscribers. I rather doubt that there will be CDMA/LTE mobile devices, since there won’t be many operators taking this path. Also, from the core network point of view, LTE won’t be able to interconnect with a CDMA network as easily as with a UMTS network. For UMTS, the LTE specification already contains all the information on how to do handovers back and forth between the two worlds.

A small comfort for Verizon: Sprint will have a similar experience moving from CDMA to WiMAX…

Verizon and LTE: All Over IP Is Shaking Up The Wireless World

Recent reports (here and here) that Verizon has chosen LTE instead of UMB as the successor technology of its current CDMA 1x EV-DO Rev. A network are likely to be a big blow for Qualcomm and the CDMA industry as a whole. The other big CDMA network operator, Sprint, has decided to go for WiMAX, and a lot of CDMA operators around the world have already jumped ship and gone to UMTS/HSDPA; Verizon is the latest addition to the list.

UMB, LTE and WiMAX are all ‘IP only’ technologies that strictly separate the wireless network from the applications running on top. This is not only beneficial for users (as discussed here) but also allows network operators to jump ship when going to the next technology, just as in the case of Verizon and Sprint. No UMTS operator has so far shown interest in doing the same, except for Vodafone’s threats that the LTE timeline is too slow for them and that they are looking at what WiMAX can do for them. Might the tight integration of LTE into the already existing 2G/3G GSM/UMTS ecosystem keep operators at bay?

So while UMB is not dead yet, the hill its backers have to climb just got a lot steeper.

If WiMAX Becomes a 3G (IMT-2000) Standard, What’s Left for 4G?

Now that 3G systems such as UMTS are in full deployment, the industry is looking forward to what comes next. While some say that WiMAX is a 4G system, the IEEE and the WiMAX Forum consider 802.16e to be a 3G technology and have asked the ITU (International Telecommunication Union) to include the standard in its IMT-2000 specification (International Mobile Telecommunications 2000). This specification is generally accepted as the umbrella defining which standards are considered 3G.

This is mainly a political move since in many regions of the world, frequencies are reserved for 3G IMT-2000 systems. If WiMAX were included in IMT-2000, and it looks like it will be in the near future, some frequency bands such as the 2.5 GHz IMT-2000 extension band in Europe could be used for WiMAX without changing policies.

So what remains for IMT-Advanced, the ITU umbrella name for future 4G technologies?

Currently there is still no clear definition by the ITU of the characteristics of future 4G IMT-Advanced systems. The ITU-R M.1645 recommendation gives first hints but leaves the door wide open:

It is predicted that potential new radio interface(s) will need to support data rates of up to approximately 100 Mbit/s for high mobility such as mobile access and up to approximately 1 Gbit/s for low mobility such as nomadic/local wireless access, by around the year 2010 […]
These data rate figures and the relationship to the degree of mobility (Fig. 2) should be seen as targets for research and investigation of the basic technologies necessary to implement the framework. Future system specifications and designs will be based on the results of the research and investigations.

When WiMAX is compared to the potential requirements above it’s quite clear that the current 802.16e standard would not qualify as a 4G IMT-Advanced standard since data rates even under ideal conditions are much lower.

3GPP’s Long Term Evolution (LTE) project will also have difficulties fulfilling these requirements. Even with the recently proposed 4×4 MIMO, data rates in a 20 MHz carrier would not exceed 326 MBit/s. And that’s already a long stretch since putting 4 antennas in a small device or on a rooftop will be far from simple in practice. If WiMAX is accepted as a 3G IMT-2000 technology, how can LTE with a similar performance be accepted as a 4G IMT-Advanced technology?

Additionally, one should not forget that IMT-2000 systems are still evolving, and UMTS is a good example. With HSDPA and HSUPA, user speeds now exceed the 2 MBit/s initially foreseen for IMT-2000 systems. But development hasn’t stopped there. Recent developments in 3GPP Release 7 and 8, called HSPA+, which will include MIMO technology and other enhancements, will bring the evolved UMTS technology to the same capacity levels as currently predicted for LTE on a 5 MHz carrier. HSPA+ is clearly not a 4G IMT-Advanced system, since it enhances a current 3G IMT-2000 radio technology. Thus, HSPA+ will be categorized as an ‘enhanced IMT-2000 system’.

Maybe that’s the reason why the IEEE 802.16 working group is already looking forward and has started work on 802.16m with the stated goal of reaching top speeds of 1 GBit/s.

When looking at current research it’s clear that the transmission speed requirements described in ITU-R M.1645 can only be achieved in a frequency band of 100+ MHz. This is quite a challenge since such large contiguous bands are rare. Thus, I have my doubts whether these requirements will remain in place for the final definition of 4G IMT-Advanced.

Does It Really Matter If A Technology Is 3.5G, 3.9G or 4G?

While discussions are ongoing, the best one can do is to look at HSPA+, WiMAX, LTE and other future developments as "Beyond 3G" systems. After all, from a user point of view it doesn’t matter if a technology is IMT-2000, Enhanced IMT-2000 or IMT-Advanced as long as data rates, coverage and other attributes of the network can keep up with the growing data traffic.

A whitepaper produced by 3G Americas has some further thoughts on the topic.

As always, comments are welcome!

How Do You Hand Over A 4G Voice Call to 2G?

WiMAX, LTE, UMB: buzzwords in the emerging 4G wireless space. Different interests, standardization groups and politics, but they all have one thing in common: All are based on IP and all will rely on Voice over IP (VoIP) in one form or another (e.g. IMS or SIP) to carry voice calls. With sheer bandwidth, IP header compression and optimized handover strategies between cells, I can imagine it happening. But what happens when you run out of network coverage and only a GSM network is available to continue the call in?

A number of alternatives exist. The first one might be Evolved EDGE, which could deliver data rates on the packet switched side of a GSM network high enough to sustain a VoIP call begun in a 4G network. However, I wouldn’t bet on this one happening everywhere. It’s more likely that the VoIP call must be continued on the circuit switched side of the GSM network. But how can that be done?

Voice Call Continuity (VCC) could come to the rescue. A first version is already standardized in 3GPP TS 23.206 and it can do this and many other interesting things. I’ve done a short intro on VCC before, take a look here. Yes, it’s standardized but it’s not a home run:

One of the problems with VCC is that the mobile needs to be connected to both the 4G network and the GSM network at the same time to perform a handover. This consumes more energy than only being connected to one network at a time. Furthermore, such a dual connection might be difficult to establish if the two networks use the same frequency band. If the 4G network is deployed in the 2.5 or 3.5 GHz band, this is not going to be a problem. If, however, classic 2G frequency bands (850, 900, 1800, 1900 MHz) are partly re-farmed and the GSM network to hand over to is nearby, VCC will become a challenge. 3GPP Release 8 might yet get a work item to study the possibility of single radio VCC (SR-VCC) to deal with these issues, and I am looking forward to seeing how handover times on the order of a few hundred milliseconds can be achieved.


All-IP wireless networks will be a great thing to have, but solving the handover to legacy wireless networks to prevent calls from dropping is going to be difficult.

WiMAX II – 802.16m – Chasing the Ghost

Looking at presentations from a recent LTE meeting, I found it quite interesting how many of them mention WiMAX 802.16m. I haven’t heard much about 802.16m yet, but since they all refer to it I thought it might be time to find out a bit more about it.

It seems to be a bit early for that search, however. First announced in early 2007, the only facts known so far about 802.16m are that the IEEE would like to create a standard as backwards compatible as possible with the current version of WiMAX (802.16e, or 802.16-2005) but with peak data rates of up to 1 GBit/s (that’s around 1,000 MBit/s).

Compared to systems deployed in live networks today, such as HSDPA with a theoretical top speed of 14 MBit/s and about 2 MBit/s in practice with a Cat-6 HSDPA mobile, these numbers are staggering. So how can such data rates be achieved? As not much is known so far, let’s speculate a bit.

Between today and WiMAX II, there are systems such as WiMAX and LTE which promise faster data rates than those available today, mainly by doing the following:

  • Increased channel bandwidth: HSDPA uses a 5 MHz channel today. WiMAX and LTE have flexible channel bandwidths from 1.25 to 20 MHz. (Note: The fastest WiMAX profile currently only uses a 10 MHz channel, for the simple reason that 20 MHz of spectrum is hard to come by.) By using a channel four times as broad as today’s, data rates can be increased fourfold.
  • Multiple Input, Multiple Output (MIMO): Here, multiple antennas at both the transmitting and receiving end are used to send independent data streams over each antenna. This is possible as signals bounce off buildings, trees and other obstacles and thus form independent data paths. Both LTE and WiMAX currently foresee 2 transmitting and 2 receiving antennas (2×2 MIMO). In the best case this doubles data rates.
  • Higher order modulation: While HSDPA uses 16QAM modulation, which packs 4 bits into a single transmission step, WiMAX and LTE will use 64QAM modulation under ideal transmission conditions, which packs 6 bits into a single transmission step.
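The three gains multiply, so a quick back-of-the-envelope calculation shows where the often-quoted numbers come from. A minimal Python sketch, using the rough figures from the list above:

```python
# Rough sanity check of the combined gains from the three techniques above.
# Baseline: ~2 Mbit/s with a Cat-6 HSDPA mobile in a live network today.
baseline_mbits = 2.0

bandwidth_gain  = 20 / 5  # 20 MHz LTE/WiMAX channel vs. a 5 MHz HSDPA carrier
mimo_gain       = 2       # 2x2 MIMO, best case: two independent streams
modulation_gain = 6 / 4   # 64QAM (6 bits/step) vs. 16QAM (4 bits/step)

estimate = baseline_mbits * bandwidth_gain * mimo_gain * modulation_gain
print(f"Estimated data rate: {estimate:.0f} Mbit/s")  # prints 24 Mbit/s
```

2 × 4 × 2 × 1.5 = 24 MBit/s, which lands right in the 20-25 MBit/s range discussed below.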

By using the techniques above, LTE and WiMAX will be able to increase today’s 2 MBit/s to about 20-25 MBit/s. That’s still far away from the envisaged 1 GBit/s. To see how to get there, let’s take a look at what NTT DoCoMo is doing in their research labs, as they have already achieved 5 GBit/s on the air interface and have been a bit more open about what they are doing (see here and especially here):

  • Again, an increase of the channel bandwidth: They use a 100 MHz channel for their system. That’s 5 times wider than the biggest channel bandwidth foreseen for LTE and 20 times wider than today’s HSDPA channel. Note that in practice it might be quite difficult to find such large channels in the already congested radio bands.
  • 12×12 MIMO: Instead of 2 transmit and receive antennas, DoCoMo uses 12 for their experiments. Designers of mobile devices already have a lot of trouble finding space for 2 antennas, so a 12×12 system should be a bit tricky to put into small devices.
  • A new modulation scheme: VSF spread OFDM. This one’s a bit mind-boggling, using CDMA and OFDM in combination. Wikipedia contains a description of something called VSF-OFCDM, which might be a close brother.

A five times wider bandwidth combined with six times the number of antennas results in a speed increase factor of 30. So multiplying 25 MBit/s by 30 results in 750 MBit/s or 0.75 GBit/s. That’s still a factor of almost 7 away from what DoCoMo says it has achieved, so I wonder where that discrepancy comes from!? I guess only time will tell.
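Running these numbers explicitly, again as a quick Python sketch with the same rough figures as above:

```python
# Scale the ~25 Mbit/s LTE estimate by DoCoMo's lab parameters.
lte_estimate_mbits = 25

bandwidth_factor = 100 / 20  # 100 MHz channel vs. the widest 20 MHz LTE channel
antenna_factor   = 12 / 2    # 12x12 MIMO vs. 2x2 MIMO

estimate_gbits = lte_estimate_mbits * bandwidth_factor * antenna_factor / 1000
print(f"Naive estimate: {estimate_gbits} Gbit/s")                  # 0.75 Gbit/s
print(f"Shortfall vs. 5 Gbit/s: factor {5 / estimate_gbits:.1f}")  # ~6.7
```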


For the moment, the wireless world is pretty much occupied with making LTE and WiMAX a reality. Pushing beyond that is not going to be easy in the real world, as bands that allow a single carrier of 100 MHz will be even harder to find than the 20 MHz envisaged for LTE. Also, cramming more than 2 antennas into a small device will be a formidable challenge.

More about 4G, LTE and WiMAX can be found here.

Broadband Was Yesterday – Wideband’s The Future

HSDPA, EVDO, WiMAX, LTE, you name them, they are all advertising these days with "mobile can now do broadband, too". I think this is true to a certain extent, if one keeps in mind that the overall capacity that can be delivered by a mobile system in a densely populated area cannot match the capacity of DSL or cable. But that’s not the aim, anyway. However, DSL and cable have already moved on.

I just listened to a tech show on C-SPAN 2 in which the CEO of Comcast introduced its new DOCSIS 3.0 modems from Arris that can do 120 MBit/s for a single subscriber. Sure, that bandwidth usually has to be shared with other households on the same coax cable. Nevertheless, the speed has already moved far beyond what we know as broadband today. DSL has also not stood still, and projects like the one in Paris (fiber to the curb, fiber to the home) have begun to offer similar speeds these days. And it certainly doesn’t stop there. According to the Paris section of this Wikipedia entry, GBit connections to homes are already in the trial phase.

So what should this next generation of broadband be called? Tough question… Comcast has decided to call it Wideband. A bad choice in my eyes, since in wireless the term wideband is already used for 3G (Wideband CDMA, W-CDMA)… But agreed, it’s certainly better than calling it Ultra Mega Broadband 😉

So trying to sell 3.5G and 4G networks with the "mobile can now do broadband, too" slogan will not work much longer. Lucky are those operators who have both fixed and wireless assets and make good use of both by combining them.

An Introduction To SC-FDMA Used By LTE In Uplink Direction

Both WiMAX and the UMTS successor technology LTE use Orthogonal Frequency Division Multiplexing (OFDM) as the core modulation technology on the air interface in downlink direction. In uplink direction, however, the two systems go different ways. While WiMAX uses OFDMA (Orthogonal Frequency Division Multiple Access), the 3GPP (3rd Generation Partnership Project) standardization group has decided to use SC-FDMA (Single Carrier Frequency Division Multiple Access) instead.

In essence, SC-FDMA builds on OFDMA, so the two systems are not as different as it seems at first. In addition, the abbreviation SC-FDMA is quite misleading, as the technology, like OFDMA, also uses many sub-carriers on the air interface. To explain how SC-FDMA works, it’s best to first take a look at OFDMA (used by WiMAX) and then discuss the differences from SC-FDMA.


OFDMA transmits a data stream over several narrowband sub-carriers simultaneously, e.g. 512, 1024 or even more, depending on the overall bandwidth of the channel (e.g. 5, 10, 20 MHz). As many bits are transported in parallel, the transmission speed on each sub-carrier can be much lower than the overall resulting data rate. This is important in a practical radio environment in order to minimize the effect of multipath fading, which is created by slightly different arrival times of the signal from different directions.

As shown in the first figure on the left, the input bits are first grouped and assigned for transmission over different frequencies (sub-carriers). In the example, 4 bits (representing one 16QAM symbol) are used to construct each sub-carrier’s signal. In theory, each sub-carrier signal could be generated by a separate transmission chain in hardware; the outputs of these blocks would then be summed up and the resulting signal sent over the air. Due to the high number of sub-carriers this approach is not practicable. Instead, a mathematical approach is taken: As each sub-carrier is transmitted on a different frequency, a graph can be constructed which shows the frequency on the x-axis and the amplitude of each sub-carrier on the y-axis. Then a mathematical function called the Inverse Fast Fourier Transform (IFFT) is applied, which converts this representation from the frequency domain into the time domain. The result, with time on the x-axis, is the same signal the separate transmission chains would have generated once summed up. The IFFT thus does exactly what the separate transmission chains for each sub-carrier would do, including summing up the individual results.

On the receiver side the signal is first demodulated and amplified. The result is then processed by a Fast Fourier Transform (FFT) function, which converts the time signal back into the frequency domain. This reconstructs the frequency/amplitude diagram created at the transmitter. At the center frequency of each sub-carrier a detector function is then used to recover the bits which were originally used to create that sub-carrier.
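The whole chain described above can be sketched in a few lines of Python. The toy numbers, 8 sub-carriers each carrying one QPSK symbol, are my own illustrative choice; real systems use hundreds of sub-carriers:

```python
import cmath

def idft(freq_bins):
    """Inverse DFT: per-sub-carrier amplitudes -> time signal.
    This is what the IFFT block in an OFDM transmitter computes."""
    n = len(freq_bins)
    return [sum(freq_bins[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n
            for t in range(n)]

def dft(time_samples):
    """Forward DFT: time signal -> per-sub-carrier amplitudes (receiver FFT)."""
    n = len(time_samples)
    return [sum(time_samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# Toy example: 8 sub-carriers, each carrying one QPSK symbol.
symbols = [1+1j, 1-1j, -1+1j, -1-1j, 1+1j, -1-1j, 1-1j, -1+1j]

time_signal = idft(symbols)   # transmitter: IFFT over the sub-carrier amplitudes
recovered   = dft(time_signal)  # receiver: FFT back to the frequency domain

# The detector sees exactly the symbols placed on the sub-carriers.
print(all(abs(a - b) < 1e-9 for a, b in zip(symbols, recovered)))  # True
```

The FFT and its inverse are exact mirror images of each other, which is why the receiver can reconstruct the transmitter’s frequency/amplitude diagram perfectly (in the absence of noise).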


Despite its name, Single Carrier Frequency Division Multiple Access (SC-FDMA) also transmits its data over many sub-carriers but adds an additional processing step, as shown in the second figure. Instead of using each group of 4 bits to form the signal for one sub-carrier as in the OFDM example, the additional processing block in SC-FDMA spreads the information of each bit over all the sub-carriers. This is done as follows: Again, a number of bits (e.g. 4, representing one 16QAM symbol) are grouped together. In OFDM, these groups of bits would have been the input of the IFFT. In SC-FDMA, however, these bits are first piped into a Fast Fourier Transform (FFT) function. The output of this step is the basis for the creation of the sub-carriers in the following IFFT. As not all sub-carriers are used by the mobile station, many of them are set to zero in the diagram. These may or may not be used by other mobile stations.

On the receiver side the signal is demodulated, amplified and processed by an FFT in the same way as in OFDMA. The resulting amplitude diagram, however, is not analyzed straight away to get the original data stream, but fed into an Inverse Fast Fourier Transform function to undo the additional signal processing originally done at the transmitter side. The result of this IFFT is again a time domain signal, which is fed to a single detector block that recreates the original bits. Thus, instead of detecting the bits on many different sub-carriers, only a single detector is used, on a single carrier.

Summary of the difference between OFDM and SC-FDMA:

OFDM takes groups of input bits (0s and 1s) to assemble the sub-carriers, which are then processed by the IFFT to get a time signal. SC-FDMA, in contrast, first runs an FFT over the groups of input bits to spread them over all sub-carriers and then uses the result as the input of the IFFT which creates the time signal. This is why SC-FDMA is sometimes also referred to as FFT-spread OFDM.
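The difference can be made concrete with a small Python sketch. The sizes are toy values of my own choosing: 4 user symbols spread over a 16-sub-carrier system, of which the mobile only occupies 4:

```python
import cmath

def dft(x, inverse=False):
    """Plain DFT; inverse=True gives the IDFT (with 1/n normalization)."""
    n, sign = len(x), (1 if inverse else -1)
    out = [sum(x[t] * cmath.exp(sign * 2j * cmath.pi * k * t / n)
               for t in range(n)) for k in range(n)]
    return [v / n for v in out] if inverse else out

# Four user symbols (e.g. QPSK) to be sent in one SC-FDMA step.
symbols = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]

spread      = dft(symbols)        # the extra FFT: spreads each symbol
subcarriers = spread + [0] * 12   # map onto 4 of 16 sub-carriers; the
                                  # zeroed ones are free for other mobiles
time_signal = dft(subcarriers, inverse=True)   # the IFFT, as in plain OFDM

# Receiver: FFT back, pick the occupied bins, undo the spreading with an
# IFFT, then a single detector looks at the resulting single-carrier signal.
bins      = dft(time_signal)[:4]
recovered = dft(bins, inverse=True)
print(all(abs(a - b) < 1e-9 for a, b in zip(symbols, recovered)))  # True
```

Note that the extra FFT/IFFT pair cancels out exactly, so no information is lost; what changes is the shape of the time signal between the two, which is where the PAPR benefit discussed below comes from.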

While SC-FDMA adds complexity at both the transmitter and the receiver side, the 3GPP standardization body has nevertheless decided in its favor, as treating the signal this way reduces the Peak to Average Power Ratio (PAPR). This is important to lower the power consumption of mobile devices. More details on PAPR can be found here.

The PAPR Problem

I’ve stumbled over PAPR (Peak to Average Power Ratio) quite a lot lately, as it seems to play a big role in WiMAX and 3GPP LTE mobile devices. Most papers mention that LTE has a better PAPR than WiMAX but fail to explain what it is and why it is so important. After some research and help from a number of experts, here’s an intro to PAPR:

When transmitting data from the mobile terminal to the network, a power amplifier is required to boost the outgoing signal to a level high enough to be picked up by the network. The power amplifier is one of the biggest consumers of energy in a device and should thus be as power efficient as possible to increase the operation time of the device on a battery charge. The efficiency of a power amplifier depends on two factors:

  • The amplifier must be able to amplify the highest peak value of the wave. Due to silicon constraints, this peak value determines the power consumption of the amplifier.
  • The peaks of the wave, however, do not transport any more information than the rest of the signal. The transmission speed therefore does not depend on the peak power required for the highest values of the wave but rather on the average power level.

As both power consumption and transmission speed are of importance for designers of mobile devices, the power amplifier should consume as little energy as possible. Thus, the lower the ratio of peak power to average power (PAPR), the longer the operating time of a mobile device at a given transmission speed compared to devices that use a modulation scheme with a higher PAPR.
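A small Python sketch makes this tangible. The worst case for a multi-carrier signal is when all sub-carriers happen to add up in phase at the same instant; the 64-carrier example below is illustrative and not taken from any standard:

```python
import cmath
import math

def papr_db(samples):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    powers = [abs(s) ** 2 for s in samples]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

n = 64
# Multi-carrier (OFDM-like) signal: 64 sub-carriers all carrying the same
# symbol, so at one instant they all add up in phase -> a large peak.
multi = [sum(cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
         for t in range(n)]
# Single-carrier signal with a constant envelope: peak power == average power.
single = [cmath.exp(2j * cmath.pi * t / n) for t in range(n)]

print(f"Multi-carrier worst-case PAPR: {papr_db(multi):.1f} dB")   # ~18.1 dB
print(f"Single-carrier PAPR:           {papr_db(single):.1f} dB")  # 0.0 dB
```

In this worst case the amplifier for the multi-carrier signal has to be dimensioned for a peak 64 times the average power, while the single-carrier amplifier can be driven much closer to its limit.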

Now let’s come back to the beginning of this blog entry, in which I said that papers generally state that LTE has a better PAPR than WiMAX. This is because of the different modulation schemes used in the uplink. While WiMAX uses OFDMA (Orthogonal Frequency Division Multiple Access), which is fast but has a high PAPR, the LTE designers chose SC-FDMA (Single Carrier Frequency Division Multiple Access), which is just as fast but is said to have a better PAPR. So what are OFDMA and SC-FDMA? Well, that’s for another blog entry.

The Timing Advance Is Back with LTE and WiMAX

In the high times of GSM, mobile enthusiasts equipped with phones that had an engineering menu had a lot of fun finding base stations by taking a closer look at the timing advance parameter. This parameter implicitly contains the distance to the base station the mobile currently communicates with. A GSM mobile requires this parameter because the farther it is away from the base station, the earlier it has to start sending data in its timeslot. This is necessary as radio waves only travel at the speed of light: without the adjustment, the transmissions of a faraway mobile would trample over the transmissions of another mobile in the next timeslot, as they would arrive too late.
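For those who want to put numbers on this, here is a short sketch using the well-known GSM figures, where one timing advance step corresponds to one bit period of round-trip delay:

```python
# Distance resolution of the GSM timing advance parameter.
c = 299_792_458             # speed of light in m/s
bit_period = 48 / 13 / 1e6  # GSM bit period in seconds, 48/13 us (~3.69 us)

# One TA step covers one bit period of ROUND-TRIP delay, hence the /2.
metres_per_step = c * bit_period / 2
print(f"Distance per TA step: {metres_per_step:.0f} m")   # ~553 m
print(f"Max cell radius (TA = 63): {63 * metres_per_step / 1000:.1f} km")
```

So the engineering menu only reveals the distance in steps of roughly 550 m, which is why pinpointing a base station took a bit of walking around.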

With UMTS, things got a bit more difficult: due to the CDMA approach of the radio interface, a timing advance parameter was no longer necessary. Unfortunately, this makes finding specific UMTS base stations quite difficult. But don’t despair: LTE and WiMAX will require a timing advance parameter again, since these systems are based on OFDMA and timeslots. This means that the network has to send timing advance information to the mobiles again to ensure their data always arrives at the instant it is supposed to. So network tracking should get easier again in the future!

Collaborative MIMO for WiMAX and LTE

In two previous blog entries I focused on the limited uplink power of mobile stations and how WiMAX, UMTS/HSDPA and LTE overcome this hurdle by allowing several mobiles to transmit simultaneously. In the future, however, limited transmission power might not be the only limitation.

WiMAX and LTE will probably both use a technology called MIMO (Multiple Input / Multiple Output), which makes use of multiple antennas at both the transmitter and the receiver to transmit independent data streams on the same frequency via different paths. Small handheld devices, however, might not be equipped with several antennas due to their small size or the additional cost incurred. Thus, they cannot make use of MIMO, which reduces both their own speed and the overall capacity of the network.

The solution to this problem is called "uplink collaborative MIMO" or multi-user MIMO (MU-MIMO). Here, the network can instruct, for example, two mobiles to transmit simultaneously, each on an independent MIMO path. Even though both signals are sent on the same frequency, a MIMO-capable base station will still be able to pick up the signals independently of each other if the main energy of each signal arrives from a different direction. This in effect creates a MIMO channel, just that the two or more antennas do not belong to one terminal but to several. An interesting approach!
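A toy sketch of why this works, with all channel values made up for illustration: two single-antenna mobiles transmit at the same time on the same frequency, and a base station with two antennas separates the streams by inverting the 2×2 channel matrix (a simple zero-forcing receiver, ignoring noise):

```python
# Symbols sent simultaneously by mobile 1 and mobile 2 (hypothetical values).
s1, s2 = 1 + 1j, -1 + 1j

# Channel gains: h[i][j] = path from mobile j to base station antenna i.
# The two columns differ because the signals arrive from different directions.
h = [[0.9 + 0.1j, 0.3 - 0.2j],
     [0.2 + 0.4j, 0.8 - 0.1j]]

# What each base station antenna receives: a different mix of both signals.
y1 = h[0][0] * s1 + h[0][1] * s2
y2 = h[1][0] * s1 + h[1][1] * s2

# Zero-forcing receiver: invert the 2x2 channel matrix to separate the streams.
det = h[0][0] * h[1][1] - h[0][1] * h[1][0]
r1 = ( h[1][1] * y1 - h[0][1] * y2) / det
r2 = (-h[1][0] * y1 + h[0][0] * y2) / det
print(abs(r1 - s1) < 1e-9 and abs(r2 - s2) < 1e-9)  # True
```

The separation only works as long as the channel matrix is well conditioned, i.e. the two mobiles’ signals really do arrive via sufficiently different paths; if the columns of the matrix become similar, the determinant shrinks and noise is amplified.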

From what I can read in the press, only Nortel has so far picked up on this and has stated that it will implement collaborative MIMO in the uplink direction for both WiMAX (here and here) and LTE (here).