What’s the Difference Between LTE ICIC and LTE-Advanced eICIC?

Recently, I've been looking into a couple of LTE-Advanced features and wondered what the difference is between ICIC (Inter-cell Interference Coordination), introduced for LTE in 3GPP Release 8, and eICIC, introduced in 3GPP Release 10 as part of LTE-Advanced. Here's my take on it in abbreviated form; for a longer description, I found a good resource here.

3GPP Release 8 LTE ICIC: This is an optional method to decrease interference between neighboring macro base stations. It works by lowering the transmit power of a subset of the subchannels in the frequency domain, which can then only be received close to the base station. These power-reduced subchannels no longer interfere with the same subchannels used in neighboring cells, so data can be sent faster on them to mobile devices close to the base station.
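To make the idea more concrete, here's a minimal toy sketch of the frequency-domain coordination; the subchannel count, power levels and 3-cell reuse pattern are my own illustrative assumptions, not values from the specification:

```python
# Toy sketch of Release 8 ICIC: each cell keeps full power on a disjoint
# set of "cell-edge" subchannels and reduces power on the rest.
# Power levels and the 3-cell reuse pattern are illustrative assumptions.

FULL_POWER_DBM = 43     # assumed full power per subchannel
REDUCED_POWER_DBM = 37  # assumed reduced power for cell-centre subchannels

def icic_power_map(cell_id, num_subchannels=12, reuse=3):
    """Per-subchannel transmit power for one cell.

    Subchannels whose index modulo `reuse` matches the cell ID keep full
    power (reserved for cell-edge users); all others are power-reduced.
    """
    return [FULL_POWER_DBM if (sc % reuse) == (cell_id % reuse)
            else REDUCED_POWER_DBM
            for sc in range(num_subchannels)]

# Neighboring cells 0, 1 and 2 never use full power on the same
# subchannel, so their cell-edge transmissions avoid each other.
maps = [icic_power_map(c) for c in range(3)]
for sc in range(12):
    full = [c for c in range(3) if maps[c][sc] == FULL_POWER_DBM]
    assert len(full) == 1
```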

3GPP Release 10 LTE-Advanced eICIC: This is part of the heterogeneous network (HetNet) approach, in which macro cells are complemented with pico cells inside their coverage area (hotspots in shopping centers, at airports, etc.). While the macro cells emit high power signals over a long range, the pico cells only emit a low power signal over short distances. To mitigate interference between a macro cell and the pico cells in its coverage area, eICIC coordinates the blanking of subframes in the time domain in the macro cell. In other words, there is no interference in those subframes from the macro cell, so data transmissions in the pico cells can be much faster. When several pico cells are used in the coverage area of a single macro cell, overall system capacity increases, as each pico cell can use the empty subframes without interference from the other pico cells. The downside, of course, is that macro cell capacity is diminished, as the macro cell can't use all subframes. Therefore, methods have to be in place to quickly increase or decrease the number of subframes assigned for exclusive use in pico areas when traffic patterns change.
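The time-domain blanking can be sketched the same way; the 10-subframe radio frame matches LTE, but the particular blanking pattern below is just an illustrative assumption:

```python
# Toy sketch of eICIC: the macro cell blanks a configurable subset of the
# 10 subframes in each radio frame, and pico cells schedule their
# cell-edge users in exactly those protected subframes.
# The pattern itself is an illustrative assumption.

def macro_schedule(abs_pattern):
    """Subframes the macro cell may use (True in the pattern = blanked)."""
    return [i for i, blanked in enumerate(abs_pattern) if not blanked]

def pico_protected(abs_pattern):
    """Subframes a pico cell can use without macro interference."""
    return [i for i, blanked in enumerate(abs_pattern) if blanked]

# Example: the macro blanks 2 of 10 subframes, trading macro capacity for
# interference-free pico capacity. Adapting to changing traffic patterns
# means simply changing this list.
pattern = [False, True, False, False, False,
           False, True, False, False, False]
assert macro_schedule(pattern) == [0, 2, 3, 4, 5, 7, 8, 9]
assert pico_protected(pattern) == [1, 6]
```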

In other words, ICIC is a macro cell interference mitigation scheme, while eICIC has been designed as part of HetNet to reduce interference between the macro and pico layer of a network (once pico cells are rolled out to increase coverage and system capacity).

Multi-Core Approaches – Qualcomm vs. Nvidia

I've recently been wondering about the different approaches companies take to increase the performance of the CPU part of mobile devices, and decided to have a look at some whitepapers. Here's the result, which you might find interesting as well:

An increase in processing power can, in a first instance, be achieved by increasing the clock rate and making instruction execution more efficient in general. This is done by using more transistors on the chip to reduce the number of clock cycles required to execute an instruction, and by increasing on-chip cache sizes to reduce how often the processor has to wait for data to be delivered from slow external RAM.

Both approaches are made possible by the ever-shrinking size of the transistors on the chip. While previous generations of smartphone chips used 90 nanometer structures, current high-end smartphones use 45 nanometer technology, and the next step to 32 and 28 nanometer structures is already in sight. When transistors get smaller, more can be fitted on the chip and power consumption at high clock rates is lowered. But there's a catch that I'll talk about below.

Another way of increasing processing power is to have several CPU cores and let the operating system assign tasks that want to execute simultaneously to different cores. Nvidia's latest Tegra design features 4 CPU cores, so four tasks can run in parallel. As that is often not required, the design allows individual cores to be deactivated and reactivated at run-time to reduce power consumption when four cores are not necessary, which is probably most of the time. In addition, Nvidia includes a 5th core that they call a "companion core", which takes over when only little processing power is needed, for example while the display is off and only low intensity background tasks have to be served. So why is a 5th core required; why can't one of the four other cores at a low clock speed take over the task? Here's where the catch comes into play that I mentioned earlier:

Total chip power consumption is governed by two components, leakage power and dynamic power. When processors are run at high clock speeds, the voltage should be kept as low as possible, as the dynamic power requirement increases linearly with frequency but with the square of the voltage. Unfortunately, optimizing the chip for low voltage operation increases the leakage power, i.e. the power a transistor consumes simply to keep its state while voltage is applied. It is this leakage power that becomes the dominant power consumption source when the CPU is idle, i.e. when the screen is switched off, only background tasks are running, etc. And it is at this point where the Tegra's companion CPU comes in. On the die it is manufactured with a different process that is less optimized for high speeds but more optimized for low leakage power. The companion CPU can thus only be run at clock speeds up to 500 MHz, but has the low power consumption advantage in the idle state. Switching back and forth between the companion CPU and the four standard cores is transparent to the operating system and can be done in around 2 milliseconds.
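The trade-off can be illustrated with the standard CMOS dynamic power relation, P_dyn ≈ C · V² · f, plus a fixed leakage term; the capacitance, voltage and leakage numbers below are illustrative assumptions, not Tegra datasheet values:

```python
# Sketch of the power trade-off: dynamic power follows P ~ C * V^2 * f,
# leakage is a fixed term that depends on the manufacturing process.
# All constants are illustrative assumptions.

def chip_power(freq_hz, voltage, leakage_w, capacitance=1e-9):
    dynamic = capacitance * voltage**2 * freq_hz  # linear in f, square in V
    return dynamic + leakage_w

# A core built in a fast (but leaky) process vs. a companion core built
# in a low-leakage process that can't reach high clock speeds.
fast_idle = chip_power(100e6, 0.9, leakage_w=0.30)
companion_idle = chip_power(100e6, 0.9, leakage_w=0.03)

# At idle, leakage dominates, so the companion core wins by a wide margin...
assert companion_idle < fast_idle

# ...while under full load the dynamic term dominates and the fixed
# leakage difference matters much less, relatively speaking.
fast_busy = chip_power(1.5e9, 1.2, leakage_w=0.30)
assert fast_busy > fast_idle
```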

Qualcomm has taken a different approach to conserve power in their latest Krait architecture. Instead of requiring all cores to run at the same clock speed, each core can run at a different speed depending on how much work the operating system assigns to it. So rather than optimizing one processor for low leakage power, their approach is to reduce the clock speed of individual cores when less processing power is required.
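Here's a little sketch contrasting independent per-core clocking with a design where all cores share one clock; the frequency steps and per-core loads are made-up illustrations, and clock frequency is used as a rough stand-in for power at a fixed voltage:

```python
# Contrast of the two clocking schemes. Frequency (in MHz) is summed as a
# rough proxy for power at fixed voltage; the frequency steps and the
# per-core load figures are illustrative assumptions.

FREQ_STEPS = (384, 1188, 1512)  # assumed available clock steps, MHz

def shared_clock_cost(loads_mhz):
    """All cores must run at the clock needed by the busiest core."""
    needed = max(min(f for f in FREQ_STEPS if f >= load)
                 for load in loads_mhz)
    return needed * len(loads_mhz)

def per_core_cost(loads_mhz):
    """Each core independently runs at the lowest clock covering its load."""
    return sum(min(f for f in FREQ_STEPS if f >= load)
               for load in loads_mhz)

# One busy core and three nearly idle ones: per-core clocking avoids
# running all four cores at the highest frequency.
loads = [1400, 100, 100, 100]
assert per_core_cost(loads) < shared_clock_cost(loads)
```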

Which of the two approaches works better in practice remains to be seen. I wouldn't be surprised, though, if at some point a combination of both were used.

Who Is Doing SoCs (System on a Chip) for Smartphones?

Looking back a couple of years, it wasn't all that uncommon to find the baseband modem, the application processor and the graphics processor in different chips in a smartphone; take the Nokia N8, for example. Obviously, having three chips is quite inefficient, as they take up space on the smartphone board and need to be fine-tuned to work with each other. Both disadvantages disappear, however, when all three components are included in a single chip. So who is doing that today? The following manufacturers come to mind:

  • Qualcomm with their Snapdragon platform. They have everything: the modem (obviously), their own application processor designs (Scorpion and Krait) based on ARM, and their Adreno GPU (based on assets bought from AMD a couple of years ago).
  • ST-Ericsson: Their NovaThor platform consisting of their Thor modem and Nova CPU+GPU based on an ARM-Cortex design and PowerVR graphics.
  • Nvidia: Originally coming from the graphics domain, they have enriched their portfolio with an all-in-one SoC that scales up to a quad-core ARM Cortex CPU with an additional companion core for low power / low processing speed operation when the display is switched off and only background tasks are being serviced. They don't have the modem integrated at the moment, but with their recent purchase of Icera, that's probably only a matter of time.
  • Intel: Not quite on the market yet, but they have their modem through the purchase of Infineon and a (hopefully) low power (enough) CPU with their new Medfield Atom based design and their own graphics processor.

All others, like Texas Instruments with their OMAP platform or Samsung with their Exynos, are missing the modem, so they are not complete. A typical combination is, for example, a Samsung CPU+GPU chip paired with a Qualcomm modem.

Am I missing someone in the CPU+GPU+modem list?

802.11n Wi-Fi Successor: 802.11ac

Over the last couple of years, 802.11n was the Wi-Fi technology everybody was talking about. Now that it is included in almost every new device, there's the inevitable question: what will be next? Sure, 802.11n designs can still be improved, with better antennas, better receivers and so on. But beyond that, a successor is almost ready: 802.11ac. It looks like they have run out of single letters for the designation, but the main technical data is probably worth two characters:

  • 80 MHz and 160 MHz channels (up from 20 MHz in 11g and 40 MHz in 11n when used), 5 GHz operation only.
  • Two 80 MHz channels in different parts of the band can be bundled (to work around other users of the 5 GHz band, e.g. weather radar).
  • 8×8 MIMO (up from 4×4 MIMO in 11n, up from 2×2 used in practice today in the mainstream)
  • Multi User MIMO, so the 8×8 array can be used to send data to four 2×2 devices simultaneously, or three devices, one with 4×4 MIMO and two more with 2×2 MIMO. Other combinations are possible, too.
  • Beamforming.
  • 256QAM modulation (8 bits per transmission step, up from 64QAM in 11g and 11n).
  • Theoretical top speed when everything is combined of 6.93 Gbit/s.
  • Practical speeds perhaps 4-5 times faster than 802.11n today, as most features are not mandatory but optional, so they will only come over time, if at all.
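The 6.93 Gbit/s figure can actually be reproduced from the PHY parameters: a 160 MHz channel has 468 data subcarriers, 256QAM with rate-5/6 coding carries 8 × 5/6 bits per subcarrier, an OFDM symbol with short guard interval lasts 3.6 µs, and 8 spatial streams multiply the result:

```python
# Reproducing the 802.11ac theoretical top speed from PHY parameters:
# 468 data subcarriers (160 MHz channel), 8 bits/subcarrier (256QAM),
# rate-5/6 coding, 3.6 us symbol (short guard interval), 8 streams.

def vht_peak_rate_bps(data_subcarriers=468, bits_per_subcarrier=8,
                      coding_rate=5 / 6, symbol_time_s=3.6e-6, streams=8):
    per_stream = (data_subcarriers * bits_per_subcarrier * coding_rate
                  / symbol_time_s)
    return per_stream * streams

peak = vht_peak_rate_bps()
assert round(peak / 1e9, 2) == 6.93  # Gbit/s, matching the figure above

# A single stream over 160 MHz still manages about 867 MBit/s.
single = vht_peak_rate_bps(streams=1)
```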

Here's a link with some more details from Electronics News, and here, a first demo of 802.11ac single stream with an 80 MHz channel from the Wi-Fi in a Qualcomm smartphone chipset: 230 MBit/s. Not bad for a single stream transmission! And here's a link to another demo with an 802.11ac access point by Buffalo: 80 MHz channel, 3×3 MIMO, 800 MBit/s. Again, quite something, and that's not even with a 160 MHz channel yet.

Intel and Android, Microsoft and ARM

Interesting times are ahead, with major alliances forged long ago not really breaking up but becoming non-exclusive. Windows and Intel have been a team in the PC world for decades but have so far failed to establish themselves in mobile. But both desperately want to be in that domain. It seems they have figured out that they can't do it together as a dream team; each needs to partner with an established player in mobile instead.

Intel with Android

So we have Intel, who seem to have finally been able to produce a chipset that is lean enough for a mobile phone (see here, here and here). Their acquisition of Infineon's wireless business for a 2G, 3G and 4G mobile baseband also helps tremendously. By adapting Google's Android to their chipset, they get a great smartphone operating system from day one, and it seems that all apps that do not directly access the hardware (i.e. everything programmed in Java, i.e. pretty much all apps except games) will run on Intel based smartphones. Not bad.

Microsoft with ARM

And then there is Microsoft on the other side. They've waited for years for Intel chips that would make their OS run well on tablets and other gadgets, but so far it has never worked out. So I guess they have lost patience and have now ported Windows 8 to ARM to run on tablets. Interesting technical insights can be found here.

Intel with Windows on Mobile?

Perhaps Microsoft will consider Intel chips for their tablets again in the future, should the aforementioned Intel/Android project work out and Intel keep churning out good mobile hardware platforms. And this Intel project, unlike previous attempts over the past few years, looks quite promising. The advantage for Microsoft of coming back to Intel is that running on an x86 architecture would remove the need to recompile Windows applications for ARM (unlike apps on Android, which are "just in time" compiled anyway).

8-carrier HSDPA – Who Wants It, Who Could Even Use It?

3GPP Release 11 contains an interesting work item: the bundling of up to 8 x 5 MHz HSDPA channels in two different bands. That's octa-carrier HSDPA, with a top downlink data rate of 337.5 MBit/s with 64QAM modulation and MIMO (HSDPA category 36). Sure, the data rate is impressive, but I have to wonder if it will be practical in the real world: I can't think of any network operator who would have 8 channels available. And even if there were some, why would you want to bundle that much spectrum for HSPA when the general trend is to move to LTE anyway? Am I missing something here?
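A quick sanity check of the category 36 figure: it is roughly 8 times the ~42 Mbit/s a single 5 MHz carrier reaches with 64QAM and 2×2 MIMO (the per-carrier value is an approximation from the single-carrier HSDPA categories, not an exact transport block calculation):

```python
# Rough arithmetic behind the octa-carrier HSDPA top speed: about
# 42.2 Mbit/s per 5 MHz carrier with 64QAM and 2x2 MIMO, times 8 carriers.
# The per-carrier rate is an approximation, not a per-TTI calculation.

def octa_carrier_rate_mbps(carriers=8, per_carrier_mbps=42.2):
    return carriers * per_carrier_mbps

# Close to the 337.5 MBit/s of HSDPA category 36.
assert abs(octa_carrier_rate_mbps() - 337.5) < 1.0
```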

What the Mainstream Press Overlooks in the US LTE vs. Europe HSPA Discussion

This week a device was announced to the press that can supposedly use LTE networks in the US and HSPA networks in Europe and Asia. To the mainstream press, things are clear: Internet access with the device will be better in the US than in Europe. Hm, they just overlooked a small detail: In the US, Verizon and AT&T currently use the 700 MHz band for LTE, and each carrier only has 10 MHz of spectrum in that band (see for example Verizon's band 13). In Europe, Dual Carrier HSDPA combines 2×5 MHz into a 10 MHz channel. This nullifies pretty much all of the theoretical speed advantage. The only thing left that LTE has and HSPA hasn't is MIMO. Add to that the denser network structure, a more mature technology and very likely lower power consumption due to optimized networks, and things suddenly look quite different. But I guess one can argue as much as one wants; 4G must by definition be better than 3G 🙂
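Here's the back-of-the-envelope arithmetic behind that argument, using commonly quoted approximate peak rates (and remember that real-world throughput is far below any of these numbers):

```python
# Commonly quoted approximate peak rates, in MBit/s. These are category
# figures, not measurements; real-world throughput is far lower.

LTE_10MHZ_2X2_MBPS = 73.0   # LTE, 10 MHz channel, 64QAM, 2x2 MIMO (approx.)
LTE_10MHZ_SISO_MBPS = 36.7  # same channel without MIMO (approx.)
DC_HSDPA_MBPS = 42.2        # Dual Carrier HSDPA, 2x5 MHz, 64QAM

# Without MIMO, a 10 MHz LTE carrier and DC-HSDPA land in the same
# ballpark; MIMO is what remains of LTE's theoretical edge.
assert abs(LTE_10MHZ_SISO_MBPS - DC_HSDPA_MBPS) < 10
assert LTE_10MHZ_2X2_MBPS > DC_HSDPA_MBPS
```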

Half The Phones Sold In Germany In 2012 Will be Smartphones

says Fritz Joussen, CEO of Vodafone Germany and president of Bitkom, a German telecom trade association. Last year it was already one third of all phones sold, which shows an interesting trend. While a few years ago a smartphone sale did not automatically mean that a data plan came with it, this may have changed in the meantime as well. After all, what's the point of having an iPhone or an Android based phone without Internet connectivity? With Symbian phones this was still a possibility, and many people used such smartphones for offline purposes only. Fortunately, prices have changed as well: 10 euros a month now connect you to the Internet with a flatrate (throttled after 300 MB) plus 50 voice minutes, with SMS and additional voice minutes at 9 cents each afterwards.

LTE-Advanced CoMP needs Fiber

So far I had always assumed that LTE-Advanced Cooperative Multi Point (CoMP) transmission would be similar to what we have in UMTS for voice calls, where several base stations transmit a signal in the downlink direction for the mobile at the same time. The mobile device then tries to decode all signals simultaneously to improve reception conditions. With the introduction of HSPA for packet based data transmission this was no longer possible, as the central scheduler in the Radio Network Controller was replaced by individual packet schedulers in the base stations. As a consequence, fast coordination between the schedulers of different base stations was not possible due to the delay and limited transmission capacity of the backhaul link.

But I thought time had moved on, technology had improved, and some way had been found for schedulers to communicate over the backhaul link to synchronize transmissions to a mobile device. Actually, that is not the case, and the CoMP scenarios that have been studied in 3GPP TR 36.819 work quite differently. In fact, all scenarios except one are based on fiber links that transmit the fully processed RF signal, which is only converted from an optical into an electromagnetic signal at a remote radio head. Here's a summary of the two modes and four approaches discussed in the study for 3GPP Release 11:

Transmission Modes

In the Joint Processing (JP) mode, the downlink data for a mobile device is transmitted from several locations simultaneously (Joint Transmission). A simpler alternative is Dynamic Point Selection (DPS), where the data is also available at several locations but only sent from one location at any one time.

Another CoMP mode is Coordinated Scheduling / Beamforming (CS/CB). Here, the downlink data for a mobile device is only available at, and transmitted from, one point. The scheduling and, optionally, beamforming decisions are made jointly among all cells in the CoMP set. The location from which the transmission is performed can be changed semi-statically.
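As a minimal illustration of Dynamic Point Selection, the network only has to pick, per scheduling interval, the transmission point with the best reported channel quality; the point names and SINR values below are made up:

```python
# Minimal sketch of Dynamic Point Selection: the user's data is available
# at every transmission point in the CoMP set, but in each scheduling
# interval only the point with the best reported channel quality sends.
# Point names and SINR values are illustrative.

def select_point(sinr_reports_db):
    """Return the transmission point with the best reported SINR."""
    return max(sinr_reports_db, key=sinr_reports_db.get)

reports = {"point_a": 3.5, "point_b": 7.1, "point_c": 5.0}
assert select_point(reports) == "point_b"
```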

Deployment Scenarios

Scenario 1, homogeneous network intra-site CoMP: A single eNodeB base station site usually comprises 3 or more cells, each responsible for a 120 degree sector. In this scenario, the eNodeB controls each of the cell schedulers. This makes it possible to schedule a joint transmission from several cells of the eNodeB, or to blank out in one cell the resource blocks used in another cell for a subscriber located in the area between two cells, reducing interference. This CoMP method is easy to implement, as no communication with external entities is required. At the same time, that is also its major downside, as there is no coordination with other eNodeBs. This means that data rates for mobile devices located between two cells of two different eNodeBs cannot be improved this way.

Scenario 2, high power TX remote radio heads: Due to the inevitable delay on the backhaul link between different eNodeBs, it's not possible to define a CoMP scheme that synchronizes their schedulers. To improve on scenario 1, it was thus decided to study the use of many (9 or more) Remote Radio Heads (RRHs) distributed over an area that today is covered by several independent eNodeBs. The RRHs are connected to a single eNodeB over fiber optic links that transport a fully generated RF signal, which the RRH only converts from an optical into an electromagnetic signal that is then transmitted over the antenna. While this CoMP approach can coordinate transmission points over a much larger area than the first approach, its practical implementation is difficult, as a fiber infrastructure must be put in place to connect the RRHs to the central eNodeB. A traditional copper based infrastructure is insufficient for this purpose due to the very high data rates required by the RF signal and the length of the cabling.
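A rough estimate shows why copper won't do: the digitized I/Q baseband stream alone is in the gigabit range. The sample width and overhead factor below are my own assumptions in the spirit of CPRI-style fronthaul links, not figures from the 3GPP study:

```python
# Order-of-magnitude estimate of the fronthaul rate to a remote radio
# head: the digitized I/Q stream for a 20 MHz LTE carrier (30.72 Msps)
# approaches a gigabit per second per antenna before line coding and
# control overhead. Sample width and overhead factor are assumptions.

def fronthaul_rate_gbps(sample_rate_hz=30.72e6, bits_per_sample=15,
                        iq_components=2, antennas=2, overhead=1.33):
    raw = sample_rate_hz * bits_per_sample * iq_components * antennas
    return raw * overhead / 1e9

# Several Gbit/s per sector: far beyond what copper backhaul can carry
# over the required cable lengths.
assert fronthaul_rate_gbps() > 2.0
```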

Scenarios 3 and 4, heterogeneous networks: Another CoMP approach is to have several low power transmitters in the area of a macro cell to cover hotspots such as parts of buildings, different locations in shopping malls, etc. The idea is to provide general coverage via the macro cell and offload localized traffic via local transmitters with a very limited range, reducing interference elsewhere. This can be done in two ways. The localized transmitters could have their own cell IDs and thus act as independent cells from a mobile device's point of view. From a network point of view, however, those cells would be little more than RRHs with a low power output instead of the high power output of scenario 2. The other option is to use RRHs as defined above with a low power output but without a separate cell ID, which makes the local signal indistinguishable from the macro cell coverage for the mobile device. Again, fiber optic cabling is required to connect the low power transmitters to a central eNodeB.

Overall, the 3GPP CoMP study comes to the conclusion that data rates could be improved by 25 to 50% for mobiles at cell edges suffering from neighbor cell interference, which is a significant enhancement. Except for scenario 1, however, fiber installations are required, which makes it unlikely that scenarios 2, 3 and 4 will be implemented on a broad scale in the next 5 years.

Plan B Is…

… for Plan A to work. At least that's what I recently read in an interview with a high ranking manager of a previously important mobile phone manufacturer. But I digress; I wanted to say something else. When I recently went on a long weekend in the countryside, the D100 (3G dongle to Wi-Fi adapter box) I've had for many years suddenly decided to no longer cooperate. Pretty bad when you are in a place that uses Swisscom Wi-Fi and you have more than a single device that wants to share your 3G Internet connection. But unlike the aforementioned manager, I did have a plan B: Wi-Fi tethering on the Android smartphone I otherwise mostly use for eBook reading and experimenting. The Wi-Fi range is probably not as good as that of the D100, but it was good enough for the hotel room, and all devices I needed connectivity for worked just fine. I love it when a plan comes together, even if it's plan B. That attitude could work miracles for the above mentioned manager as well, but perhaps he's not allowed to have a plan B… I digress again. I'm glad that something I speculated about 6 years ago in 2006 now works so well.