I was in New York for a couple of days recently, and since my N95-8GB (yes, I am a traditionalist) only supports UMTS on 2100 MHz, my voice calls and small-screen Internet connectivity had to make do with the GSM network layer. But there are interesting things to discover here as well.
It looks like AT&T consistently uses AMR Half-Rate in the center of the city to double its voice capacity. All calls I established were set up with this codec. AT&T also uses both the 850 and 1900 MHz bands, and when the mobile detected both, the network always handed the voice call over to a 1900 MHz carrier, even when the signal in the 850 MHz band was 30 dB stronger. There could be many reasons for this; perhaps they use the 1900 MHz band as a capacity layer and reserve the 850 MHz carriers for difficult terrain and indoor coverage.
Speaking of difficult terrain: AT&T, when you have a minute, have a radio team look at your coverage in Penn Station. Even when I was standing still, my calls in the underground station frequently dropped. Despite a high signal level, audio quality was horrible, which points to interference, and every couple of seconds the call was handed over between the 850 and 1900 MHz layers while the traffic channel bounced between AMR Full-Rate and Half-Rate. Quite a frustrating experience.
4 thoughts on “AMR Half Rate in New York”
There is an error in your post. The 850 MHz signal was probably 30 dB stronger. 30 dBm is an absolute power level of 1 watt, not a difference; a received signal that strong would mean you were standing right under the antenna.
In any case, 30 dB is a huge difference, far more than the extra path loss at the higher frequency would explain, so most likely the 1900 MHz layer is deployed less densely than the 850 MHz one.
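To put a rough number on "not justified by the higher frequency": under a simple free-space propagation assumption (my own back-of-the-envelope sketch, not part of the original comment), the extra path loss going from 850 to 1900 MHz is only about 7 dB, nowhere near 30 dB:

```python
import math

def fspl_delta_db(f1_mhz: float, f2_mhz: float) -> float:
    """Extra free-space path loss of f2 relative to f1, in dB.

    Free-space path loss scales with frequency as 20*log10(f),
    so the difference between two bands at equal distance is
    20*log10(f2/f1).
    """
    return 20 * math.log10(f2_mhz / f1_mhz)

# 1900 MHz vs 850 MHz at the same distance from the same site:
print(round(fspl_delta_db(850, 1900), 1))  # ~7.0 dB
```

Real-world propagation models add further frequency-dependent terms (building penetration in particular hits 1900 MHz harder), but even so, a consistent 30 dB gap points at site density rather than physics.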
This experience is not unusual. At home, my phone bounces between 850 and 1900 even in idle mode, whether locked to GSM or in dual mode. I believe 1900 is indeed used as the capacity layer, but the system parameters seem over-biased toward 1900. And AT&T's GSM voice quality is mediocre at best.
As far as I know, all sites in areas where AT&T owns both 850 and 1900 spectrum are dual-band, so there shouldn’t be any area where the best server at 1900 is 30 dB weaker than the best server at 850. My guess is that, due to some extremely long neighbor lists, phones don’t necessarily flip between 1900 and 850 on the same site, but wind up bouncing between sites as well as bands. This might result in a perceived 30 dB difference between bands, e.g., if the phone jumps from 850 on site A to 1900 on site B.
Yes, it's 30 dB stronger, not 30 dBm stronger. A subtle but important difference! Duly corrected, thanks!
It's been a while since I looked at 2G, but the last time I did, hierarchical cell selection was usually used to push traffic down onto the higher frequencies – so a call would be set up on the macro layer at, say, 900 MHz, before being pushed down to 1800 MHz on micro cells / SLMs / etc.
Usually a capacity setup, and it also pushes localised (and slow-moving) traffic onto the higher frequencies for coverage / handover reasons (especially in dense urban deployments).
Comments are closed.