The Three Eras Of Telecom Infrastructure Hardware

As you might have noticed on the blog, I’ve been doing a lot of hands-on exploration of bare metal hardware, virtual machines, containers, and orchestration (e.g. Kubernetes) lately. My motivation is twofold. For one thing, I like to evolve my private cloud. Current status: 25 containers are now running in my private cloud, and I have replaced 3 virtual machines. The other reason for learning more about these topics is to better understand in which direction the hardware and underlying software of telecommunication infrastructure are evolving. This has made me realize even more clearly that there are at least three distinct eras of telecom infrastructure hardware, and, as I will explain below, I am very happy that we are now entering the third.

The Early Days – Proprietary Hardware and Software

Let’s go back 20 years to the early 2000s. At that time, I was working for Nortel, a telecom supplier. The wireless world was still quite voice-centric, and in the core network, the GSM Mobile Switching Center (MSC) was the center of the wireless product portfolio. As with every manufacturer’s product, the MSC was an adaptation of a fixed-line switching center, built with proprietary technology, and it had little to do with the computing equipment used in IT. Everything was proprietary, from cabinets, cabinet sizes, and card dimensions to backplane technology and operating systems. Nortel’s DMS-100 switch even used a little-known processor, the Motorola 88100. At the time, a single CPU pair was the center of the complete system and served tens of thousands of subscribers. There were lots of other cards in the system with their own smaller processors to deal with things like input/output to management terminals, etc. A very distributed architecture. The only things standardized at this point were the protocols for communicating with the outside world, such as the radio access network, the subscriber database (HLR), or other MSCs. These parts of the network were proprietary as well, but they could come from other manufacturers because the protocols were standardized.

The Middle Ages – ATCA

And then, things started to change. By the early 2000s, it became clear that the off-the-shelf industry could offer components for telecom equipment that were much cheaper than proprietary in-house developments. This gave rise to the ‘Advanced Telecommunications Computing Architecture’ initiative, known in the industry as ATCA. ATCA basically defines the dimensions of shelves and cards and specifies the backplane connector and the communication protocols used between the cards. This probably helped to bring hardware costs down, as cheaper off-the-shelf components could be used, and a standardized floor layout for the equipment simplified the management of data centers. However, the architecture of ATCA equipment still resembled the setup of the proprietary days. An ATCA shelf typically had CPU cards, storage cards, interface cards, etc., and these were, of course, all proprietary. From the cabinet and backplane connectivity point of view, cards from one manufacturer could theoretically be put into a cabinet of another manufacturer. To my knowledge, however, this never happened in practice, as the hardware on the board was still proprietary and was hence only supported by the software of a particular equipment vendor (such as Ericsson, Nortel, or Huawei).

In 2021, ATCA systems are (still) used in many telecom networks all over the world. However, this is changing quite quickly now.

Today – COTS

One important thing that has changed in telecommunication networks over the last decade is that all transport interfaces have converged on Ethernet over copper or fiber cables. Therefore, there is no longer any need for dedicated telecommunication hardware that supports, from an IT point of view, ‘strange’ network transmission protocols and physical layers. This in turn paved the way for using the same standard off-the-shelf hardware that was already widely used in data centers. On the control plane of a mobile network, which deals with things such as authentication, mobility, and session management, it became straightforward to put such services in virtual machines and run them on standard x86 server blades. It is mainly a compute problem that doesn’t require specialized hardware. In industry terms, this is referred to as Network Function Virtualization (NFV), and ETSI has standardized the way it should be done. This drew quite a number of non-incumbent players into the mobile core network market, and today there is real competition there. One could also say that telecom network equipment has moved in the direction the IT industry moved a decade earlier: standardized x86-based servers that run virtual machines in huge data centers. And there is another big advantage of using COTS x86-based hardware: it is no longer necessary to use proprietary operating system software that is specifically adapted to the telecom hardware. Instead, Linux and open source hypervisors are the basis of everything.
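To make this a bit more tangible, here is a minimal sketch of how a virtualized control-plane function could be brought up on a standard Linux/KVM server with the libvirt Python bindings. The VM name, resource sizes, disk image path, and bridge name are hypothetical placeholders, and a real NFV deployment would orchestrate this through a management stack such as OpenStack rather than talking to a single hypervisor directly:

```python
# Minimal sketch: boot a virtualized control-plane function (e.g. an
# MME) as a KVM virtual machine via the libvirt Python API.
# The domain name 'vmme-01', the image path, and the bridge 'br-s1'
# are illustrative placeholders only.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>vmme-01</name>
  <memory unit='GiB'>8</memory>
  <vcpu>4</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/vmme-01.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <!-- hypothetical bridge toward the signaling network -->
      <source bridge='br-s1'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
dom = conn.defineXML(DOMAIN_XML)       # register the domain definition
dom.create()                           # boot the virtual network function
print(f"{dom.name()} running: {bool(dom.isActive())}")
conn.close()
```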

Mobile networks are not only about signaling on the control plane, however, but also about forwarding the IP packets of subscribers to and from the Internet. This is referred to as the ‘user plane’. User plane processing was much harder to virtualize, i.e. to put into virtual machines, as packet inspection was a task mostly done in hardware for performance reasons. But this has changed as well. In papers from 2017, Intel and Ericsson declared this problem solved: inspecting, modifying, and forwarding IP packets can now be done cost-efficiently in virtual machines, too. As those papers are already a few years old, this has probably arrived in practice by now. When you search for ‘virtual Evolved Packet Core’ (vEPC) on the net, you will find many examples.
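To illustrate what this per-packet work looks like once it runs in software, here is a toy sketch of the first step a vEPC user plane performs on every uplink packet: stripping the GTP-U tunnel header to get at the subscriber’s inner IP packet. It is deliberately simplified (no extension headers, no error recovery) and relies only on the well-known GTPv1-U header layout; production systems do this millions of times per second with kernel-bypass frameworks such as DPDK:

```python
# Toy illustration of user-plane packet processing in software:
# parse the mandatory 8-byte GTPv1-U header and hand the inner IP
# packet onwards. Assumes the E/S/PN flag bits are zero, i.e. no
# optional fields follow the mandatory header.
import struct

def decapsulate_gtpu(datagram: bytes) -> tuple[int, bytes]:
    """Strip the GTP-U header, return (TEID, inner IP packet)."""
    flags, msg_type, length, teid = struct.unpack("!BBHI", datagram[:8])
    assert (flags >> 5) == 1, "not GTP version 1"
    assert msg_type == 0xFF, "not a G-PDU (user data) message"
    return teid, datagram[8 : 8 + length]

# Example: a G-PDU carrying a 4-byte dummy payload on TEID 0x1234.
frame = struct.pack("!BBHI", 0x30, 0xFF, 4, 0x1234) + b"\xde\xad\xbe\xef"
teid, payload = decapsulate_gtpu(frame)
print(hex(teid), payload.hex())
```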

In the meantime, the IT industry has already moved on, and virtual machines are no longer the only way to process data in the cloud. Instead, containers and microservices are now preferred over virtual machines in many areas. 3GPP has thus decided to base the 5G core on containerized microservices rather than on network functions in virtual machines. Have a look at this post for details. With this approach, specialized telecom hardware is required even less.
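One consequence of this microservice design is that the 5G core network functions talk to each other over plain HTTP APIs (the ‘service-based architecture’) rather than classic telecom protocols. As a rough illustration, here is a toy Python service that mimics the registration endpoint of an NRF-like function; the path and payload handling are simplified placeholders, not the real 3GPP OpenAPI definitions, which also run over HTTP/2 rather than HTTP/1.1:

```python
# Toy sketch of a 5G-core-style service-based interface: a minimal
# 'NRF' that lets other network functions register themselves via
# HTTP. Endpoint path and behavior are simplified placeholders.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

REGISTERED_NFS = {}  # in-memory stand-in for the NRF's NF profile store

class TinyNRF(BaseHTTPRequestHandler):
    def do_PUT(self):
        # e.g. PUT /nnrf-nfm/v1/nf-instances/<nf-instance-id>
        nf_id = self.path.rsplit("/", 1)[-1]
        body = self.rfile.read(int(self.headers["Content-Length"]))
        REGISTERED_NFS[nf_id] = json.loads(body)  # store the NF profile
        self.send_response(201)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), TinyNRF).serve_forever()
```

Packaged into a container image, a handful of such small stateless services can be scheduled and scaled by Kubernetes on any COTS server, which is exactly the operational model the 5G core borrows from the IT industry.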

What about the Radio Access Network?

So the last domain in telecom networks where dedicated hardware is still used is the RAN. This is because even in LTE and 5G NR networks, the radio part of the base station cannot be built with COTS hardware. But this is changing as well with Open RAN, which aims, among other things, to separate hardware from software and to put as much of the workload as possible on standard x86 hardware. Have a look at this post for details.

And I’m Happy about COTS

The great thing about telecom equipment makers using COTS hardware and virtualizing and containerizing services is that open source software is the basis for everything. It has thus become possible to experiment with this very foundation at home, on a small scale. It is even possible to run your own LTE core network at home and, if you like, a RAN complete with a UE simulation or a real RF board. Virtual machines and containers? No problem, everything is open source and runs on a small notebook in just the same way as on mighty servers in the cloud. That opens up incredible opportunities to better understand how things work and fit together, not only by reading about them but by actually experimenting with and using the technology at home. A very powerful development for those who want to push the boundaries. There is no way I could have imagined this 20 years ago, when I sat at a workstation at work, typing in commands for a DMS-100 voice switch that went to an esoteric Motorola 88100 CPU hidden somewhere inside one of the dozens of cabinets that made up the switch.

One thought on “The Three Eras Of Telecom Infrastructure Hardware”

  1. You’ve hit the nail on the head!
    This, together with network slicing, will bring massive change to the way telecoms networks are deployed and operated. The network (or at least your own slice of the network) could even end up running on the internet router in your home.
    Things are getting very interesting…
