What’s Coming At You Without NAT?

So far, I've mostly been looking at Network Address Translation (NAT) as a good countermeasure on mobile devices to block unsolicited incoming communication so the modem doesn't have to wake up all the time. Another benefit of NAT is of course to keep the bad guys away from your devices on the network layer. But how much unsolicited traffic is there actually that reduces battery life on mobile devices and puts your device or local network at risk? As I didn't have any specific numbers on that, I decided to try it out and see what happens.

I ran my tests with a Linux PC connected to the Internet and running Wireshark in various setups. In one setup, I used my DSL line. On the router I assigned the Linux PC as the DMZ host, i.e. all unknown incoming packets were forwarded to it. Needless to say, the PC had all current security patches applied and only ran the services that were really required. In another setup I used a 3G dongle and an APN without NAT. The rest of the world didn't really care whether the link was fixed or wireless; the incoming unsolicited traffic was pretty much the same. Therefore, I don't distinguish between the two in the following.
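If you want to reproduce this kind of measurement without staring at Wireshark for hours, a few lines of Python can count unsolicited inbound connection attempts per destination port. This is only a minimal sketch of my own, assuming the scapy library is installed and root privileges are available on the exposed host; the IP address is a made-up placeholder.

```python
# Minimal sketch: count unsolicited inbound TCP SYNs per destination port.
# Assumes scapy is installed and the script runs as root on the exposed host.
from collections import Counter
from scapy.all import sniff, IP, TCP

MY_IP = "203.0.113.10"  # placeholder for the public address assigned to the DMZ host
counts = Counter()

def track(pkt):
    # A SYN without ACK towards our address is an unsolicited connection attempt.
    if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt[IP].dst == MY_IP:
        flags = int(pkt[TCP].flags)
        if flags & 0x02 and not flags & 0x10:  # SYN set, ACK not set
            counts[pkt[TCP].dport] += 1

# Capture for one hour, then print the most probed ports (telnet, ssh, http, ...).
sniff(filter="tcp", prn=track, store=False, timeout=3600)
for port, hits in counts.most_common(10):
    print(f"port {port}: {hits} unsolicited SYNs")
```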

And here is what happened:

Incoming Traffic Frequency: There was incoming traffic not generated by any of my running applications every 5-10 minutes, i.e. around 10 connection requests per hour, or 10 additional and unnecessary modem wakeup calls.

Type of Incoming Traffic:

Some of the incoming traffic could easily be identified as P2P file sharing connection requests, most likely from P2P clients still trying to reach a peer that previously held the IP address I had been assigned. No harm done here.

Most connection requests were of a less harmless nature, definitely sent to see if services were running that could potentially be exploited. Here are some interesting highlights detected during my 6-hour experiment:

  • Frequent connection requests to the telnet, ssh and http ports. I ran the tests with several different dynamic IP addresses assigned and always got those requests from many different sources. These are definitely probes to see if old and outdated services were running that could be exploited.
  • Unsolicited SIP requests: I saw those from a number of different origins, so people are running SIP scanners to see if VoIP servers are running on systems out there.
  • Active VNC attack: In one instance the VNC port was probed. As I had a VNC server running on that system, the other end started the handshake dialogue and logged off once it had my server version string. I checked with a real VNC client, and even when I don't type in the password the communication goes much further than what I saw in this event. There are some VNC server flavours out there that are vulnerable, so this was most likely an active scan for those.
  • Microsoft's Remote Desktop port: I also saw a number of RDP connection requests, so even before the recent critical security patch for a remote code execution vulnerability, automated scans were running against this port.
  • Microsoft SQL database weakness probe
  • Unsolicited DNS responses: Every now and then I got DNS response packets which were not triggered by internal DNS queries. The responses contained URLs for xxx sites. I haven't quite understood the background behind that yet.
  • Port scans: General port scans not related to P2P services, aimed at well-known port numbers, e.g. POP3 on port 110, etc.

I ran the test with several different IP addresses on different days to ensure I wasn't just seeing traffic triggered by whatever a previous holder of the IP address had been doing. The result was the same in each case, so everything described above pretty much has to come from automated scripts just running up and down the IP address space looking for targets. Also interesting are the countries of origin of those requests. It's pretty much an international phenomenon: requests were coming from everywhere, including the US, European countries, Russia, China, Australia, etc.

Not a peaceful world out there…

I’ve Switched to 3G-Only Mode

Only a few years ago, the first thing many people did when buying a new 3G phone was to switch it to 2G-only mode as they felt it would reduce power consumption. Whether that had an effect or not, that's what they did. Times have changed and today smartphone users leave their device in 2G/3G mode because connected data apps (web browsing, email, instant messaging, etc.) have become an integral part of the experience. I have now gone even further and have switched to 3G-only mode, as UMTS coverage has become almost as ubiquitous as 2G where I live (Cologne-Bonn area). And here's why:

  • HD-voice: Quite a number of my friends have HD-voice capable phones now with superior voice quality. For the moment, that's only available on UMTS in practice so I don't want my phone to be handed over to 2G and thus be kicked back to the traditional narrow-band voice codec.
  • Simultaneous voice and data: Especially during longer conference calls I take on the mobile phone, I like to be able to switch to the web browser or the email client to do some background research. UMTS has offered simultaneous voice and data connectivity from day one, and I've become used to it; I don't want to be thrown to 2G during a voice call and have my data applications stop working.
  • Security: Yes, this one's perhaps a bit on the paranoid side, but GSM is not as uncrackable as it used to be. Better to be on the 3G side.

Admittedly, I switch back to dual-mode 2G/3G when I travel as 3G coverage is not as ubiquitous as in my home town.

Are You LTE-Advanced With 2×10 MHz Carrier Aggregation?

With LTE networks on air these days, it seems that those network operators that don't have one yet need to come up with an excuse. Not that they really have to from a technical point of view when they have a well running and optimized HSPA+ Dual Carrier network, but still, LTE sounds nicer. So one of my favourite excuses is "we are waiting for LTE-Advanced", without giving more details. But what is it exactly they are waiting for?

LTE-Advanced consists of many features such as LTE CoMP, which I discussed a couple of days ago. I am pretty sure that's not the feature they are waiting for to come to the market. Rather, I get the impression that they are waiting for Carrier Aggregation (CA), which allows bundling several carriers in different bands together.

So say you have 10 MHz in one band and 10 MHz in another band and you want to bundle them together. Is that LTE-Advanced then? Sure it is, from a definition point of view. But is it better than taking a "plain old" 20 MHz channel as defined in LTE Release 8 that you don't have to scrape together? And why wait for that in the first place, isn't 10 MHz good enough to start with?
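To put a number on that comparison, here's a small sketch of my own (not from any operator material) using the standard LTE bandwidth-to-resource-block mapping. It shows that 2×10 MHz carrier aggregation gives exactly the same number of resource blocks as a single Release 8 20 MHz carrier:

```python
# Downlink resource blocks per LTE channel bandwidth, per 3GPP TS 36.101.
RESOURCE_BLOCKS = {1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100}

def total_resource_blocks(component_carriers_mhz):
    """Sum of resource blocks across all aggregated component carriers."""
    return sum(RESOURCE_BLOCKS[bw] for bw in component_carriers_mhz)

print(total_resource_blocks([10, 10]))  # 100 RBs with 2x10 MHz carrier aggregation
print(total_resource_blocks([20]))      # 100 RBs with a plain Release 8 20 MHz carrier
```

In other words, the aggregate capacity is the same either way; the real benefit of CA is only that the 20 MHz doesn't have to be contiguous spectrum in a single band.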

So the point I am trying to make here is to listen with pointed ears when someone uses the term "LTE-Advanced" and to ask what specifically is meant by it. Combining two 10 MHz channels doesn't count for me (even though it is technically LTE-Advanced). Having said that, I can hardly wait for the press to fall into the trap and declare one country more advanced in wireless than another because an "LTE-Advanced" network (with 2×10 MHz CA) has been deployed there, while other parts of the world are "lagging behind" with networks that have 20 MHz LTE Release 8 channels deployed.

Bah, so much double-talk.

LTE Map and Allocation Calculators

If you are in the "advanced" LTE stage (not to be confused with LTE-Advanced) and care about resource blocks, subframes, physical channels, the control format indicator, antenna ports, HARQ indicator channels, etc. and how all of that comes together, I've found two interesting links to visualize all that:

The first link is to an LTE Resource Grid calculator. After setting all input parameters such as the channel size (1.4 to 20 MHz), the number of symbols used for the downlink control channel, etc., the resource grid is visualized with the different physical channels marked in different colors. Great stuff, finally an easy way to transform all those formulas in the spec into an easy-to-understand map and to see how changing the input parameters changes the channel map. Also, the map is a great way to understand how much of the channel is used for control information, and thus overhead, and how much is used for actual user data.

The second link is an LTE Physical Downlink Shared Channel Allocation Calculator. Given the channel bandwidth, control format indicator, modulation type, the number of resource blocks assigned to a device and a couple of other input parameters, the calculator comes up with the number of bits that are transmitted per slot and subframe (1 ms) to a device. Again, it's interesting to play around with the input parameters and see how the result changes in real time.
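For a rough feel of what such a calculator does, here is a simplified sketch of my own that estimates the raw PDSCH bits per subframe from the resource block count, the control format indicator and the modulation. It deliberately ignores reference signals, sync/broadcast channels and channel coding, so the real calculators linked above will report different (and more accurate) numbers:

```python
# Rough estimate of raw PDSCH bits per 1 ms subframe for a single device
# (normal cyclic prefix; reference signals and coding rate are ignored).
BITS_PER_SYMBOL = {"QPSK": 2, "16QAM": 4, "64QAM": 6}

def pdsch_bits_per_subframe(resource_blocks, cfi, modulation, mimo_layers=1):
    symbols_per_subframe = 14                  # 2 slots x 7 OFDM symbols
    data_symbols = symbols_per_subframe - cfi  # first 'cfi' symbols carry the control region
    resource_elements = 12 * resource_blocks * data_symbols  # 12 subcarriers per RB
    return resource_elements * BITS_PER_SYMBOL[modulation] * mimo_layers

# Example: 50 resource blocks (10 MHz channel), CFI of 2, 64QAM, single layer
print(pdsch_bits_per_subframe(50, 2, "64QAM"))
```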

Have fun

CDMA / LTE Dual-Radio with a Single Baseband Chip

LTE has a bit of a problem with voice and a number of different approaches exist to sail around it for the moment. While some network operators might have an inclination towards CS-Fallback (CSFB) to GSM and UMTS, others like Verizon have gone the dual-radio approach, i.e. having two radios active at the same time, one for CDMA-1x and one for LTE. An example is the HTC Thunderbolt, which has two radio chips inside. For CDMA it uses a Qualcomm MSM-8655 and for LTE it uses an MDM-9600. For details see here. But it seems the two-chip approach might not be necessary for much longer. In this whitepaper, Qualcomm states that "for LTE handsets, the 8960 modem enables […] simultaneous CDMA voice and LTE data (SVLTE [Simultaneous Voice and LTE])". That certainly fixes the issue of requiring two baseband chips in a CDMA/LTE smartphone. A potential solution for the GSM/LTE world as well?

German DSL and LTE on a Coverage Map

Here's a link to an interesting map (Breitbandatlas) on the website of the German Department of Commerce showing where in the country high-speed Internet access is available at speeds of >= 1 Mbit/s. The map is an overlay of fixed-line DSL availability with HSPA and LTE coverage. It is split into tiny cells and for each cell the networks available at that location are listed. The result is 99.5% population coverage.

A very good value but it should be noted that for those covered by HSPA and LTE, there's a volume limit per month, typically between 5 and 30 GB depending on the price. That's quite enough for most people and includes occasional Youtube use. Don't forget though to tell your kids about the limit, too 🙂

What’s the Difference Between LTE ICIC and LTE-Advanced eICIC?

Recently, I've been looking into a couple of LTE-Advanced features and was wondering what the difference is between ICIC (Inter-Cell Interference Coordination), introduced for LTE in 3GPP Release 8, and eICIC, introduced in 3GPP Release 10 as part of LTE-Advanced. Here's my take on it in abbreviated form; for a longer description I found a good resource here.

3GPP Release 8 LTE ICIC: This is an optional method to decrease interference between neighboring macro base stations. It works by lowering the transmit power of a part of the subchannels in the frequency domain, which can then only be received close to the base station. These subchannels do not interfere with the same subchannels used in neighboring cells, and thus data can be sent faster on them to mobile devices close to the base station.

3GPP Release 10 LTE-Advanced eICIC: This is part of the heterogeneous network (HetNet) approach, where macro cells are complemented with pico cells inside their coverage area (hotspots in shopping centers, at airports, etc.). While the macro cells emit long-range, high-power signals, the pico cells only emit a low-power signal over short distances. To mitigate interference between a macro cell and the pico cells in its coverage area, eICIC coordinates the blanking of subframes in the time domain in the macro cell. In other words, there is no interference from the macro cell in those subframes, so data transmissions in the pico cells can be much faster. When several pico cells are used in the coverage area of a single macro cell, overall system capacity is increased as each of them can use the empty subframes without interference. The downside is of course that macro cell capacity is diminished, as it can't use all subframes. Therefore, methods have to be put in place to quickly increase or decrease the number of subframes that are assigned for exclusive use in the pico areas when traffic patterns change.

In other words, ICIC is a macro cell interference mitigation scheme, while eICIC has been designed as part of HetNet to reduce interference between the macro and pico layer of a network (once pico cells are rolled out to increase coverage and system capacity).
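To make the time-domain blanking idea a bit more concrete, here is a toy sketch; the ten-subframe radio frame length is from the spec, but the particular blanking pattern and the 'macro'/'pico' labels are just my own illustration:

```python
# Toy illustration of eICIC time-domain blanking: the macro cell leaves a configurable
# set of subframes empty so the pico cells can schedule their users in them without
# macro interference. The chosen pattern {2, 6} is purely illustrative.
FRAME_LENGTH = 10  # subframes per LTE radio frame (1 ms each)

def blanking_pattern(blank_subframes):
    """Per-subframe usage: 'macro' where the macro transmits, 'pico' where it blanks."""
    return ["pico" if sf in blank_subframes else "macro" for sf in range(FRAME_LENGTH)]

pattern = blanking_pattern(blank_subframes={2, 6})
print(pattern)
# Macro capacity shrinks to 8/10 of a frame, while every pico cell can reuse
# the two blanked subframes free of macro interference.
print("macro share:", pattern.count("macro") / FRAME_LENGTH)
```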

Multi-Core Approaches – Qualcomm vs. Nvidia

I've recently been wondering about the different approaches taken by companies to increase the performance of the CPU part of mobile devices and decided to have a look at some whitepapers. Here's the result, which you might find interesting as well:

An increase in processing power can in the first instance be achieved by increasing the clock rate and by making instruction execution more efficient in general. This is done by using more transistors on the chip to reduce the number of clock cycles required to execute an instruction and by increasing the on-chip memory cache sizes to reduce the occasions on which the processor has to wait for data to be delivered from external and slow RAM.

Both approaches are made possible by the ever shrinking size of the transistors on the chip. While previous generations of smartphone chips used 90 nanometer structures, current high-end smartphones use 45 nanometer technology, and the next step to 32 and 28 nanometer structures is already in sight. When transistors get smaller, more can be fitted on the chip and power consumption at high clock rates is lowered. But there's a catch that I'll talk about below.

Another way of increasing processing power is to have several CPU cores and have the operating system assign tasks that want to be executed simultaneously to different cores. Nvidia's latest Tegra design features 4 CPU cores, so four tasks can be run in parallel. As that is often not required, the design allows individual cores to be deactivated and reactivated at run-time to reduce power consumption when four cores are not necessary, which is probably most of the time. In addition, Nvidia features a 5th core that they call a "companion core" that takes over when only little processing power is needed, for example while the display is off and only low-intensity background tasks have to be served. So why is a 5th core required, why can't just one of the four other cores at a low clock speed take over the task? Here's where the catch comes into play that I mentioned earlier:

Total chip power consumption is governed by two components, leakage power and dynamic power. When processors are run at high clock speeds, a low voltage is required because dynamic power increases linearly with the clock frequency but with the square of the voltage. Unfortunately, optimizing the chip for low-voltage operation increases the leakage power, i.e. the power a transistor consumes just by having a voltage applied to keep its state. It is this leakage power which becomes the dominant power consumption source when the CPU is idle, i.e. when the screen is switched off, when only background tasks are running, etc. And it is at this point where Tegra's companion CPU comes in. On the die it is manufactured with a different process that is less optimized for high speeds but more optimized for low leakage power. The companion CPU can thus only be run at clock speeds up to 500 MHz but has the low power consumption advantage in the idle state. Switching back and forth between the companion CPU and the four standard cores is seamless to the operating system and can be done in around 2 milliseconds.
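As a back-of-the-envelope illustration of why voltage matters so much more than clock speed for the dynamic part, here's a tiny sketch based on the usual P_dyn ≈ C · V² · f approximation; the capacitance value is an arbitrary placeholder, only the relative comparison matters:

```python
# Back-of-the-envelope dynamic power model: P_dyn ~ C * V^2 * f.
def dynamic_power(capacitance, voltage, frequency_hz):
    return capacitance * voltage ** 2 * frequency_hz

C = 1e-9  # arbitrary switched capacitance in farads, purely illustrative

# Doubling the clock doubles dynamic power ...
print(dynamic_power(C, 1.0, 1.0e9), dynamic_power(C, 1.0, 2.0e9))
# ... while raising the voltage by 20% at the same clock already costs 44% more power.
print(dynamic_power(C, 1.0, 1.0e9), dynamic_power(C, 1.2, 1.0e9))
```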

Qualcomm has taken a different approach in their latest Krait architecture to conserve power. Instead of requiring all cores to run at the same clock speed, each core can be run at a different speed depending on how much work the operating system is handing to it. So rather than optimizing one processor core for low leakage power, their approach to conserve power is to reduce the clock speed of individual cores when less processing power is required.

Which of the two approaches works better in practice is yet to be seen. I wouldn't be surprised though if at some point a combination of both would be used.

Who Is Doing SoCs (System on a Chip) for Smartphones?

Looking back a couple of years, it wasn't all that uncommon to find the baseband modem, the application processor and the graphics processor in different chips in a smartphone. Take the Nokia N8 for example. Obviously, having three chips is quite inefficient as they take up space on the smartphone board and need to be fine-tuned to work with each other. Both disadvantages disappear, however, when all three components are included in a single chip. So who is doing that today? The following manufacturers come to mind:

  • Qualcomm with their Snapdragon platform. They have everything: the modem (obviously), their own ARM-based application processor designs (Scorpion and Krait), and their Adreno GPU (based on assets bought from AMD a couple of years ago).
  • ST-Ericsson: Their NovaThor platform consists of their Thor modem and a Nova CPU+GPU based on an ARM Cortex design with PowerVR graphics.
  • Nvidia: Originally coming from the graphics domain, they have enriched their portfolio with an all-in-one SoC that scales up to a quad-core ARM Cortex CPU with an additional companion core for low-power / low-processing-speed operation when the display is switched off and only background tasks are being serviced. They don't have the modem integrated at the moment, but with their recent purchase of Icera that's also only a matter of time.
  • Intel: Not quite on the market yet, but they have their modem through the purchase of Infineon and a (hopefully) low power (enough) CPU with their new Medfield Atom based design and their own graphics processor.

All others, like Texas Instruments with their OMAP platform or Samsung with their Exynos, are missing the modem, so they are not complete. Combinations are, for example, a Samsung CPU+GPU chip combined with a Qualcomm modem.

Am I missing someone in the CPU+GPU+modem list?

802.11n Wi-Fi Successor: 802.11ac

Over the last couple of years, 802.11n was the Wi-Fi technology everybody was talking about. Now that it is included in almost any new device, there's the inevitable question: what will be next? Sure, 802.11n designs can still be improved on, with better antennas, better receivers and so on. But besides that, a successor is almost ready: 802.11ac. It looks like they have run out of single-letter designations, but the main technical data is probably worth two characters:

  • 80 MHz and 160 MHz channels (up from 20 MHz in 11g and 40 MHz in 11n when used), 5 GHz operation only.
  • Two 80 MHz channels in different parts of the band can be bundled (to work around other users of the 5 GHz band, e.g. weather radar).
  • 8×8 MIMO (up from 4×4 MIMO in 11n and the 2×2 typically used in practice today).
  • Multi User MIMO, so the 8×8 array can be used to send data to four 2×2 devices simultaneously, or three devices, one with 4×4 MIMO and two more with 2×2 MIMO. Other combinations are possible, too.
  • Beamforming.
  • 256QAM modulation (8 bits per symbol, up from 64QAM in 11g and 11n).
  • A theoretical top speed of 6.93 Gbit/s when everything is combined (see the short calculation after this list).
  • Practical speeds perhaps 4-5 times faster than 802.11n today, as most features are not mandatory but optional and will thus only come over time, if at all.
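For those wondering where the 6.93 Gbit/s headline number comes from, here's a small sketch using the generic PHY rate formula (data subcarriers × bits per symbol × coding rate × spatial streams, divided by the symbol duration with short guard interval); the subcarrier counts are the 802.11ac values for 160 and 80 MHz channels:

```python
# PHY rate = data subcarriers x bits/symbol x coding rate x spatial streams / symbol duration.
def phy_rate_bps(data_subcarriers, bits_per_symbol, coding_rate, streams,
                 symbol_duration_s=3.6e-6):  # 3.2 us OFDM symbol + 0.4 us short guard interval
    return data_subcarriers * bits_per_symbol * coding_rate * streams / symbol_duration_s

# 160 MHz channel (468 data subcarriers), 256QAM (8 bits), coding rate 5/6, 8 spatial streams
print(phy_rate_bps(468, 8, 5 / 6, 8) / 1e9)  # ~6.93 Gbit/s
# 80 MHz channel (234 data subcarriers), single stream, same modulation and coding
print(phy_rate_bps(234, 8, 5 / 6, 1) / 1e6)  # ~433 Mbit/s PHY rate
```

Actual throughput will of course stay well below the PHY rate, which is why the single-stream 80 MHz demo mentioned below reaching 230 Mbit/s is still quite respectable.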

Here's a link with some more details by Electronics News and here a first demo of single-stream 802.11ac with an 80 MHz channel from the Wi-Fi part of a Qualcomm smartphone chipset: 230 Mbit/s. Not bad for a single-stream transmission! And here's a link to another demo with an 802.11ac access point by Buffalo: 80 MHz channel, 3×3 MIMO, 800 Mbit/s. Again, quite something, and that's not even with a 160 MHz channel yet.