Are You LTE-Advanced With 2×10 MHz Carrier Aggregation?

With LTE networks on air these days, it seems that operators who don't have one yet need to come up with an excuse. Not that they really need one from a technical point of view if they have a well-running and optimized HSPA+ Dual Carrier network, but still, LTE sounds nicer. So one of my favourite excuses is "we are waiting for LTE-Advanced", without further details. But what exactly is it they are waiting for?

LTE-Advanced consists of many features, such as the LTE CoMP I discussed a couple of days ago. I am pretty sure that's not the feature they are waiting for to come to the market. Rather, I get the impression that they are waiting for Carrier Aggregation (CA), which allows bundling several carriers in different bands together.

So say you have 10 MHz in one band and 10 MHz in another band and you want to bundle them together. Is that LTE-Advanced then? Sure it is, from a definition point of view. But is it better than a "plain old" 20 MHz channel as defined in LTE Release 8 that you don't have to scrape together? And why wait for that in the first place, isn't 10 MHz good enough to start with?
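To put some numbers on that, here's a minimal back-of-the-envelope sketch, just for illustration (the bandwidth-to-resource-block mapping is taken from 3GPP TS 36.101):

```python
# Resource blocks per LTE channel bandwidth (3GPP TS 36.101)
RESOURCE_BLOCKS = {1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100}

ca_rbs = 2 * RESOURCE_BLOCKS[10]   # two aggregated 10 MHz carriers
rel8_rbs = RESOURCE_BLOCKS[20]     # one "plain old" 20 MHz Release 8 carrier

print(f"2x10 MHz carrier aggregation:    {ca_rbs} resource blocks")
print(f"Single 20 MHz Release 8 channel: {rel8_rbs} resource blocks")
# Both come out at 100 resource blocks, i.e. the same raw capacity.
# The CA variant just carries the "LTE-Advanced" label.
```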

So the point I am trying to make is to listen carefully when someone uses the term "LTE-Advanced" and to ask what specifically is meant by it. Combining two 10 MHz channels doesn't count for me (even though it is technically LTE-Advanced). Having said that, I can hardly wait for the press to fall into the trap and declare one country more advanced in wireless than another because an "LTE-Advanced" network (with 2×10 MHz CA) has been deployed there, while other parts of the world are "lagging behind" with deployed 20 MHz LTE Release 8 channels.

Bah, so much double-talk.

LTE Map and Allocation Calculators

If you are in the "advanced" LTE stage (not to be confused with LTE-Advanced) and care about resource blocks, subframes, physical channels, the control format indicator, antenna ports, HARQ indicator channels and so on, and how all of that comes together, I've found two interesting links that visualize all of it:

The first link is to an LTE Resource Grid calculator. After setting all input parameters such as the channel size (1.4 to 20 MHz), the number of symbols used for the downlink control channel, etc., the resource grid is visualized with the different physical channels marked in different colors. Great stuff, finally an easy way to transform all those formulas in the spec into an easy-to-understand map and to see how changing the input parameters changes the channel map. Also, the map is a great way to understand how much of the channel is used for control information, and thus overhead, and how much is used for actual user data.
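To get a feeling for the numbers behind such a map, here's a minimal sketch of the underlying grid arithmetic (my own simplification, assuming the normal cyclic prefix; the calculator itself goes much further and also marks reference signals and the individual physical channels):

```python
# Basic LTE downlink resource grid arithmetic (normal cyclic prefix)
RESOURCE_BLOCKS = {1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100}

def resource_grid(bandwidth_mhz, control_symbols):
    subcarriers = RESOURCE_BLOCKS[bandwidth_mhz] * 12  # 12 subcarriers per RB
    symbols_per_subframe = 14                          # 2 slots x 7 OFDM symbols
    total_re = subcarriers * symbols_per_subframe      # resource elements per subframe
    control_re = subcarriers * control_symbols         # PDCCH/PCFICH/PHICH region
    return total_re, control_re

total, control = resource_grid(10, control_symbols=3)
print(f"{total} resource elements per subframe, "
      f"{control} ({100 * control / total:.0f}%) in the control region")
# Reference signals etc. reduce the share left for user data further.
```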

The second link is to an LTE Physical Downlink Shared Channel Allocation Calculator. Given the channel bandwidth, control format indicator, modulation type, the number of resource blocks assigned to a device and a couple of other input parameters, the calculator comes up with the number of bits that are transmitted per slot and subframe (1 ms) to a device. Again, it's interesting to play around with the input parameters and see how the result changes in real time.
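As a rough approximation of what such a calculator does, consider the following sketch (again my own simplification: it ignores reference signal overhead and the transport block size tables of 3GPP TS 36.213, so the real numbers come out lower):

```python
# Rough PDSCH capacity estimate for one device (normal CP, single stream)
BITS_PER_SYMBOL = {"QPSK": 2, "16QAM": 4, "64QAM": 6}

def pdsch_bits_per_subframe(num_rbs, cfi, modulation):
    data_symbols = 14 - cfi            # OFDM symbols left after the control region
    res = num_rbs * 12 * data_symbols  # resource elements available for the PDSCH
    return res * BITS_PER_SYMBOL[modulation]

bits = pdsch_bits_per_subframe(num_rbs=50, cfi=3, modulation="64QAM")
print(f"{bits} bits per 1 ms subframe -> "
      f"{bits / 1000:.1f} Mbit/s before channel coding")
```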

Have fun!

CDMA / LTE Dual-Radio with a Single Baseband Chip

LTE has a bit of a problem with voice and a number of different approaches exist to sail around it for the moment. While some network operators might have an inclination towards CS-Fallback (CSFB) to GSM and UMTS, others like Verizon have gone the dual-radio approach, i.e. having two radios active at the same time, one for CDMA-1x and one for LTE. An example is the HTC Thunderbolt, which has two radio chips inside: for CDMA it uses a Qualcomm MSM-8655 and for LTE an MDM-9600. For details see here. But it seems the two-chip approach might not be necessary for much longer. In this whitepaper, Qualcomm states that "for LTE handsets, the 8960 modem enables […] simultaneous CDMA voice and LTE data (SVLTE [Simultaneous Voice and LTE])". That certainly fixes the issue of requiring two baseband chips in a CDMA/LTE smartphone. A potential solution for the GSM/LTE world as well?

German DSL and LTE on a Coverage Map

Here's a link to an interesting map (Breitbandatlas) on the website of the German Department of Commerce showing where in the country high-speed Internet access is available at speeds of ≥ 1 Mbit/s. The map is an overlay of fixed-line DSL availability with HSPA and LTE coverage. It is split into tiny cells, and for each cell the networks available at that location are listed. The result is 99.5% population coverage.

A very good value, but it should be noted that for those covered by HSPA and LTE, there's a monthly volume limit, typically between 5 and 30 GB depending on the price. That's quite enough for most people and includes occasional YouTube use. Don't forget though to tell your kids about the limit, too 🙂

What’s the Difference Between LTE ICIC and LTE-Advanced eICIC?

Recently, I've been looking into a couple of LTE-Advanced features and was wondering a bit what the difference is between ICIC (Inter-cell Interference Coordination), introduced for LTE in 3GPP Release 8, and eICIC, introduced in 3GPP Release 10 as part of LTE-Advanced. Here's my take on it in abbreviated form; for a longer description, I found a good resource here.

3GPP Release 8 LTE ICIC: This is an optional method to decrease interference between neighboring macro base stations. It works by lowering the transmit power on a part of the subchannels in the frequency domain, which can then only be received close to the base station. These subchannels do not interfere with the same subchannels used in neighboring cells, so data can be sent faster on them to mobile devices close to the base station.
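As a toy illustration of the idea, the sketch below partitions the resource blocks of three neighboring cells in a soft-frequency-reuse fashion (my own simplified model; in practice the cells exchange this information over the X2 interface rather than using a fixed pattern):

```python
# Toy frequency-domain ICIC: each of three neighboring cells uses full
# power on a different third of the resource blocks, so cell-edge users
# can be scheduled there without interference from the neighbors.
NUM_RBS = 50  # 10 MHz carrier

def icic_power_mask(cell_id, full_power=1.0, reduced_power=0.3):
    return [full_power if rb % 3 == cell_id % 3 else reduced_power
            for rb in range(NUM_RBS)]

for cell in range(3):
    full = [rb for rb, p in enumerate(icic_power_mask(cell)) if p == 1.0]
    print(f"Cell {cell}: full power on RBs {full[:5]}... "
          f"({len(full)} of {NUM_RBS})")
```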

3GPP Release 10 LTE-Advanced eICIC: This is part of the heterogeneous network (HetNet) approach, where macro cells are complemented with pico cells inside their coverage area (hotspots in shopping centers, at airports, etc.). While the macro cells emit long-range high-power signals, the pico cells only emit a low-power signal over short distances. To mitigate interference between a macro cell and several pico cells in its coverage area, eICIC coordinates the blanking of subframes in the time domain in the macro cell. In other words, there is no interference from the macro cell in those subframes, so data transmissions in the pico cells can be much faster. When several pico cells are used in the coverage area of a single macro cell, overall system capacity is increased, as each pico cell can use the empty subframes without interference from the other pico cells. The downside is of course that the macro cell's capacity is diminished, as it can't use all subframes. Therefore, methods have to be put in place to quickly increase or decrease the number of subframes assigned for exclusive use in pico areas when traffic patterns change.
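To make the time-domain coordination a bit more tangible, here's a toy almost-blank-subframe (ABS) pattern (my own simplified placement; the standardized patterns are signaled as a 40-bit bitmap for FDD, but the principle is the same):

```python
# Toy eICIC: the macro cell blanks a configurable share of subframes;
# pico cells schedule their cell-edge users in exactly those subframes,
# free of macro interference.
PATTERN_LEN = 40  # FDD ABS patterns repeat every 40 subframes

def abs_pattern(num_blank):
    blank = {round(i * PATTERN_LEN / num_blank) for i in range(num_blank)}
    return ["blank" if i in blank else "macro" for i in range(PATTERN_LEN)]

pattern = abs_pattern(num_blank=10)  # macro gives up 25% of its subframes
print(f"Macro capacity left: {pattern.count('macro') / PATTERN_LEN:.0%}")
# A real network would adapt num_blank as traffic shifts between the
# macro and pico layers, as described above.
```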

In other words, ICIC is a macro cell interference mitigation scheme, while eICIC has been designed as part of HetNet to reduce interference between the macro and pico layer of a network (once pico cells are rolled out to increase coverage and system capacity).

Multi-Core Approaches – Qualcomm vs. Nvidia

I've recently been wondering about the different approaches taken by companies to increase the performance of the CPU part in mobile devices and decided to have a look at some whitepapers. Here's the result, which you might find interesting as well:

An increase in processing power can in a first instance be achieved by increasing the clock rate and by making instruction execution more efficient in general. This is done by using more transistors on the chip to reduce the number of clock cycles required to execute an instruction, and by increasing the on-chip memory cache sizes to reduce the occasions on which the processor has to wait for data to be delivered from slow external RAM.

Both approaches are made possible by the ever-shrinking size of the transistors on the chip. While previous generations of smartphone chips used 90 nanometer structures, current high-end smartphones use 45 nanometer technology, and the next step to 32 and 28 nanometer structures is already in sight. When transistors get smaller, more can be fitted on the chip and power consumption at high clock rates is lowered. But there's a catch that I'll talk about below.

Another way of increasing processing power is to have several CPU cores and have the operating system assign tasks that want to be executed simultaneously to different cores. Nvidia's latest Tegra design features 4 CPU cores, so four tasks can run in parallel. As that is often not required, the design allows individual cores to be deactivated and reactivated at run-time to reduce power consumption when four cores are not necessary, which is probably most of the time. In addition, Nvidia features a fifth core that they call a "companion core", which takes over when only little processing power is needed, for example while the display is off and only low-intensity background tasks have to be served. So why is a fifth core required, why can't one of the four other cores take over the task at a low clock speed? Here's where the catch comes into play that I mentioned earlier:

Total chip power consumption is governed by two components, leakage power and dynamic power. To run processors at high clock speeds, the supply voltage has to be kept as low as possible, since dynamic power increases linearly with frequency but with the square of the voltage. Unfortunately, optimizing the chip for low-voltage operation increases the leakage power, i.e. the power a transistor consumes just to keep its state while voltage is applied. It is this leakage power that becomes the dominant power consumption source when the CPU is mostly idle, i.e. when the screen is switched off and only background tasks are running. And it is at this point where the Tegra's companion CPU comes in. On the die it is manufactured with a different process that is less optimized for high speeds but more optimized for low leakage power. The companion CPU can thus only be run at clock speeds of up to 500 MHz, but has the low power consumption advantage in the idle state. Switching back and forth between the companion CPU and the four standard cores is seamless to the operating system and takes around 2 milliseconds.
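A minimal power model makes the trade-off visible (all numbers below are made up purely for illustration):

```python
# Toy power model: dynamic power ~ C * V^2 * f, leakage power ~ V * I_leak
def total_power(c_eff, voltage, freq_hz, leak_current):
    dynamic = c_eff * voltage ** 2 * freq_hz
    leakage = voltage * leak_current
    return dynamic + leakage

# Near-idle operation at 1 MHz, fictitious numbers:
fast_core = total_power(c_eff=1e-9, voltage=1.0, freq_hz=1e6, leak_current=0.10)
companion = total_power(c_eff=1e-9, voltage=1.0, freq_hz=1e6, leak_current=0.01)

print(f"Near-idle power, fast core:      {fast_core:.3f} W")
print(f"Near-idle power, companion core: {companion:.3f} W")
# The dynamic part (~0.001 W) is negligible at this clock rate; leakage
# dominates, which is exactly why the low-leakage companion core pays off.
```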

Qualcomm has taken a different approach in their latest Krait architecture to conserve power. Instead of requiring all cores to run at the same clock speed, each core can run at a different speed depending on how much work the operating system assigns to it. So rather than optimizing one processor for low leakage power, their approach is to reduce the clock speed of individual cores when less processing power is required.
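Using the same toy model, the benefit of per-core clocks over a common clock can be sketched like this (again with made-up numbers; a lower clock also allows a lower voltage, which is where the square-law saving comes from):

```python
# Toy comparison: four cores on one common clock (set by the busiest
# core) vs. Qualcomm-style independent per-core clocks.
C_EFF = 1e-9                              # fictitious switched capacitance

def core_power(voltage, freq_hz):
    return C_EFF * voltage ** 2 * freq_hz  # dynamic power only

loads = [1.5e9, 0.3e9, 0.3e9, 0.3e9]      # one busy core, three light ones
volts = {1.5e9: 1.1, 0.3e9: 0.8}          # lower clock -> lower voltage

common = sum(core_power(volts[1.5e9], 1.5e9) for _ in loads)
per_core = sum(core_power(volts[f], f) for f in loads)
print(f"Common clock: {common:.2f} W, per-core clocks: {per_core:.2f} W")
```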

Which of the two approaches works better in practice remains to be seen. I wouldn't be surprised though if at some point a combination of both were used.

Who Is Doing SoCs (System on a Chip) for Smartphones?

Looking back a couple of years, it wasn't all that uncommon to find the baseband modem, the application processor and the graphics processor on different chips in a smartphone. Take the Nokia N8, for example. Obviously, having three chips is quite inefficient, as they take up space on the smartphone board and need to be fine-tuned to work with each other. Both disadvantages disappear, however, when all three components are included in a single chip. So who is doing that today? The following manufacturers come to mind:

  • Qualcomm with their Snapdragon platform. They have everything: the modem (obviously), their own ARM-based application processor designs (Scorpion and Krait), and their Adreno GPU (based on assets bought from AMD a couple of years ago).
  • ST-Ericsson with their NovaThor platform, consisting of their Thor modem and their Nova CPU+GPU based on an ARM Cortex design and PowerVR graphics.
  • Nvidia: Originally coming from the graphics domain, they have enriched their portfolio with an all-in-one SoC that scales up to a quad-core ARM Cortex CPU, with an additional CPU for low-power / low-processing-speed operation when the display is switched off and only background tasks are being serviced. They don't have the modem integrated at the moment, but with their recent purchase of Icera, that's probably only a matter of time.
  • Intel: Not quite on the market yet, but they have their modem through the purchase of Infineon's wireless division, a (hopefully) low-power (enough) CPU with their new Medfield Atom-based design, and their own graphics processor.

All others, like Texas Instruments with their OMAP platform or Samsung with their Exynos, are missing the modem, so they are not complete. Combinations are used instead, for example a Samsung CPU+GPU chip combined with a Qualcomm modem.

Am I missing someone in the CPU+GPU+modem list?

802.11n Wi-Fi Successor: 802.11ac

Over the last couple of years, 802.11n was the Wi-Fi technology everybody was talking about. Now that it is included in almost every new device, there's the inevitable question: what will be next? Sure, 802.11n designs can still be improved on, with better antennas, better receivers and so on. But besides that, a successor is almost ready: 802.11ac. It looks like they have run out of single-letter designations, but the main technical data is probably worth two characters:

  • 80 MHz and 160 MHz channels (up from 20 MHz in 11g and 40 MHz in 11n when used), 5 GHz operation only.
  • Two 80 MHz channels in different parts of the band can be bundled (to work around other users of the 5 GHz band, e.g. weather radar).
  • 8×8 MIMO (up from 4×4 MIMO in 11n, and from the 2×2 used in practice in the mainstream today).
  • Multi User MIMO, so the 8×8 array can be used to send data to four 2×2 devices simultaneously, or three devices, one with 4×4 MIMO and two more with 2×2 MIMO. Other combinations are possible, too.
  • Beamforming.
  • 256QAM modulation (8 bits per transmission step, up from 64QAM in 11g and 11n).
  • Theoretical top speed of 6.93 Gbit/s when everything is combined (see the back-of-the-envelope calculation after this list).
  • Practical speeds perhaps 4-5 times faster than 802.11n today, as most features are not mandatory but optional, so they will only come over time, if at all.
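For those who want to check the 6.93 Gbit/s figure, here's how it comes together from the PHY parameters (468 data subcarriers in a 160 MHz channel, 256QAM, the highest code rate of 5/6, 8 spatial streams and the short guard interval):

```python
# Back-of-the-envelope 802.11ac top speed, all maximums combined
data_subcarriers = 468     # 160 MHz channel
bits_per_symbol = 8        # 256QAM
coding_rate = 5 / 6        # highest code rate
spatial_streams = 8        # 8x8 MIMO
symbol_time_s = 3.6e-6     # OFDM symbol incl. short guard interval

rate = (data_subcarriers * bits_per_symbol * coding_rate
        * spatial_streams) / symbol_time_s
print(f"{rate / 1e9:.2f} Gbit/s")  # -> 6.93 Gbit/s
```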

Here's a link with some more details by Electronics News, and here a first demo of 802.11ac single stream with an 80 MHz channel from the Wi-Fi in a Qualcomm smartphone chipset: 230 Mbit/s. Not bad for a single-stream transmission! And here's a link to another demo with an 802.11ac access point by Buffalo, 80 MHz channel, 3×3 MIMO, 800 Mbit/s. Again, quite something, and that's not even with a 160 MHz channel yet.

Intel and Android, Microsoft and ARM

Interesting times are ahead, with major alliances forged long ago not really breaking up but becoming non-exclusive. Windows and Intel have been a team in the PC world for decades but have so far failed to establish themselves in mobile. Both desperately want to be in that domain, and it seems they have figured out that they can't do it together as a dream team: each needs to partner with an established player in mobile to help them succeed there.

Intel with Android

So we have Intel, who seem to have finally been able to produce a chipset that is lean enough for a mobile phone (see here, here and here). Their acquisition of Infineon's wireless division for a 2G, 3G and 4G mobile baseband also helps tremendously. By adapting Google's Android to their chipset, they have a great smartphone operating system from day one, and it seems that all apps that do not directly access the hardware (i.e. everything programmed in Java, i.e. pretty much all apps except games) will run on Intel-based smartphones. Not bad.

Microsoft with ARM

And then there is Microsoft on the other side. They've waited for years for Intel chips to make their OS run on tablets and other gadgets, but so far it has never worked out. So I guess they have lost patience and have now ported Windows 8 to ARM to run on tablets. Interesting technical insights can be found here.

Intel with Windows on Mobile?

Perhaps Microsoft will consider Intel chips for their tablets again in the future, should the aforementioned Intel/Android project work out and Intel keep churning out good mobile hardware platforms. And this Intel project, unlike the previous attempts over the past few years, looks quite promising. The advantage for Microsoft of coming back to Intel is that running on an x86 architecture would remove the need to recompile Windows applications for ARM (unlike apps on Android, which are always compiled "just in time").

8-carrier HSDPA – Who Wants It, Who Could Even Use It?

3GPP Release 11 contains an interesting work item: the bundling of up to 8 × 5 MHz HSDPA channels in two different bands. That's octa-carrier HSDPA, with a top downlink data rate of 337.5 Mbit/s with 64QAM modulation and MIMO (HSDPA category 36). Sure, the data rate is impressive, but I have to wonder if it will be practical in the real world: I can't think of any network operator who would have 8 channels available. And even if there were some, why would you want to bundle that much spectrum for HSPA when the general trend is to move to LTE anyway? Am I missing something here?
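For the curious, the 337.5 Mbit/s figure falls out of simple arithmetic (assuming the maximum 64QAM transport block of 42192 bits per stream, as for a category 14 device, doubled by 2×2 MIMO and multiplied across 8 carriers):

```python
# Where the 337.5 Mbit/s peak rate comes from
tb_bits = 42192    # max transport block per stream (64QAM, 15 codes)
tti_s = 0.002      # 2 ms HSDPA TTI
streams = 2        # 2x2 MIMO
carriers = 8       # 8 x 5 MHz

rate = tb_bits * streams * carriers / tti_s
print(f"{rate / 1e6:.1f} Mbit/s")  # -> 337.5 Mbit/s
```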