An NFV (Network Function Virtualization) and SDN Primer

I'm getting quite a number of questions lately on what Network Function Virtualization (NFV) and Software-Defined Networking (SDN) are all about. Whenever I try to give an elevator-pitch style answer, however, I can often see that lots of question marks remain. I guess the problem is that people who haven't seen virtualization in practice have difficulty imagining the concept. So I've decided to explain NFV and SDN from a different angle, starting with virtualization on the desktop or notebook PC, i.e. at home, which is what most people can try out themselves, imagine or at least relate to. Once the concept of virtualization at home becomes clear, the next step is to look at why and how virtualization is used in data centers in the cloud today. From there it's only a tiny step to NFV. And finally, I'll explain how SDN fits into the picture. This is probably one of the longest posts I've ever written, so bring some time. As part of the exercise, I'll cover things like Software-Defined Networks (SDNs), Network Function Virtualization (NFV), OpenStack, OpenFlow, the Open Networking Foundation, etc. (Note: Should you have noticed that the order of the terms in the last sentence does not make sense at all, you probably already know everything that I'm going to talk about in this post.)

Virtualization At Home

So how do we get from a desktop PC and what can be done with it today to NFV? Desktop and notebook PC hardware has become incredibly powerful over the years, memory capacity exceeds what most people need for most tasks and hard drive capacity is just as abundant. So most of the time, the processor and memory are not heavily utilized at all. The same has been happening on the server side, with lots of CPU cycles and memory wasted if a physical server is only used for a single purpose, e.g. as a file server.

To make better use of the hardware and to enable the user to do quite a number of new and amazing things with his PC, the industry has come up with virtualization, which means creating a virtual environment on a PC that looks like a real PC to any kind of software that runs in that virtual environment. This simulation is so complete that you can run a full operating system in it and it will never notice the difference between real hardware and simulated (virtual) hardware. The program that does this is called a hypervisor. Hypervisor sounds a bit intimidating but the basic idea behind it is very simple: just think of it as a piece of software that simulates all components of a PC and denies any program running in the virtual machine direct access to real physical hardware. In practice this works by the hypervisor granting direct, unrestricted access to only a subset of CPU machine instructions. Whenever the program running in the virtual environment on the real CPU wants to exchange data with a physical piece of hardware via a corresponding CPU input/output machine instruction, the CPU interrupts the program and calls the hypervisor to handle the situation. Let's take a practical but somewhat simplified example: if the machine instruction is about transferring a block of data from memory to a physical device such as a hard drive, the hypervisor takes that block of data and, instead of writing it to a physical hard drive, writes it into a hard drive image file residing on a physical disk. It then returns control to the program running in the virtual environment. The program running in the virtual environment never knows that the block of data was not written to a real hard drive and happily continues its work. Interacting with any other kind of physical hardware, such as the graphics card, devices connected to a USB port, input from the keyboard, mouse, etc., works in exactly the same way: the CPU interrupts program execution whenever a machine instruction is executed that tries to read from or write to a physical resource.
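To make this trap-and-emulate idea a little more tangible, here is a tiny Python sketch. It is purely conceptual and not how a real hypervisor such as KVM or VirtualBox is implemented; all class and method names are invented for illustration. A "guest" issues disk writes, and the "hypervisor" intercepts each one and redirects it into an ordinary image file instead of real hardware.

    # Toy illustration of the trap-and-emulate idea: the "guest" thinks it is
    # writing blocks to a real disk, but every I/O operation is intercepted by
    # the "hypervisor" and redirected into an ordinary image file on the host.
    # Conceptual sketch only, not how KVM, VirtualBox etc. work internally.

    BLOCK_SIZE = 512

    class ToyHypervisor:
        def __init__(self, image_path):
            self.image_path = image_path
            open(image_path, "ab").close()      # the image file stands in for the guest's disk

        def trap_disk_write(self, block_number, data):
            # Called whenever the guest executes a (simulated) disk I/O instruction.
            with open(self.image_path, "r+b") as image:
                image.seek(block_number * BLOCK_SIZE)
                image.write(data)               # goes into the image file, not a real disk

    class ToyGuest:
        def __init__(self, hypervisor):
            self.hypervisor = hypervisor

        def write_block(self, block_number, payload):
            # From the guest's point of view this is a normal disk write; it has
            # no way of telling that the hypervisor handles it behind the scenes.
            self.hypervisor.trap_disk_write(block_number, payload.ljust(BLOCK_SIZE, b"\x00"))

    hv = ToyHypervisor("guest_disk.img")
    guest = ToyGuest(hv)
    guest.write_block(0, b"hello from inside the virtual machine")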

Running an Operating System in a Virtual Machine

There are several things that make such a virtual environment tremendously useful on a PC. First, a complete operating system can be run in the virtual environment without any modifications, in addition to the normal operating system that runs directly on the real hardware. Let's say you have Windows 7 or 8 running on your PC, and every now and then you'd like to test new software but you don't want to do it on your 'real' operating system because you're not sure whether it's safe or whether you want to keep it installed after trying it out. In such a scenario a virtual machine in which another Windows 7 or 8 operating system (the so-called guest operating system) is executed makes a lot of sense, as you can try out things there without making any modifications to your 'real' operating system (called the host operating system). Another example would be if you ran a Linux distribution such as Ubuntu as your main operating system (i.e. the host operating system) but every now and then you need to use a Windows program that is not available on that platform. There are ways to run Windows programs on Linux, but running them in a virtual machine in which Windows 7 or 8 is installed is the better approach in many cases. In both scenarios the guest operating system runs in a window on the host operating system. The guest operating system knows nothing about its screen going to a window; from its point of view it puts its graphical user interface onto a real screen via the (simulated) graphics card. It just can't tell the difference. Also, when the guest operating system writes something to its (simulated) hard disk, the hypervisor translates that request and writes the content into a large file on the physical hard drive. Again, the guest operating system has no idea that this is going on.

Running Several Virtual Machines Simultaneously

The second interesting feature of virtual machines is that the hypervisor can execute several of them simultaneously on a single physical computer. Here's a practical example: I'm running Ubuntu Linux as my main (host) operating system on my notebook and do most of my everyday tasks with it. But every now and then I also want to try out new things that might have an impact on the system, so I run a second Ubuntu Linux in a virtual machine as a sort of playground. In addition, I often have another virtual machine running Windows 7 simultaneously, alongside the virtual machine running Ubuntu. As a consequence I'm running three operating systems at the same time: the host system (Ubuntu), one Ubuntu in a virtual machine and a Windows 7 in another virtual machine. Obviously a lot of RAM and hard drive storage is required for this, and when the host and both virtual machines work on computationally intensive tasks they have to share the resources of the physical CPU. But that's an exception, as most of the time I'm only working on something on the host operating system or in one of the virtual machines and only seldom have something computationally intensive running on several of them simultaneously. And when I don't need a virtual machine for a while I just minimize its window rather than shutting down the guest operating system. After all, the operating system in the virtual machine takes little to no resources while it is idle. Again, note that the guest OS doesn't know that it's running in a window or that the window has been minimized.

Snapshots

And yet another advantage of virtual machines is that the virtual machine manager software can create snapshots of a virtual machine. In the easiest scenario a snapshot is made while the virtual machine instance is not running. Creating a snapshot is then as simple as freezing the file that contains the hard disk image for that virtual machine and creating a new file to which all changes from now on are recorded. Returning to the state of the virtual machine when the snapshot was taken is as simple as throwing away the file into which the changes were written. It's even possible to create a snapshot of a virtual machine while it is running. In addition to creating a new file to record future changes to the hard drive image, the current state of all simulated devices, including the CPU and all its registers, is saved and a copy of the RAM is written to disk. Once this is done, the operating system in the virtual machine continues to run as if nothing had happened at all. From its point of view nothing has actually happened, because the snapshot was made by the hypervisor from outside the virtual environment, so it has no way of knowing that a snapshot was even made. Going back to the state captured in the snapshot later on is then done by throwing away all changes that have been made to the hard drive image, loading the RAM content from the snapshot file and restoring the state of all simulated hardware components to what it was when the snapshot was made. Once that is done, all programs that were running at the time the snapshot was made resume at exactly the machine instruction they were about to execute. In other words, all windows on the screen are in the same position as they were when the snapshot was made, the mouse pointer is back in its original place, music starts playing again at the point the snapshot was made, etc. From the guest operating system's point of view it's as if nothing has happened; it doesn't even know that it has just been started from a snapshot. The only thing that might seem odd to it is that when it requests the time from a network based time server, there is a large gap between the time the network time server reports and the system clock, because when the snapshot is restored, the system clock still contains the value of the time the snapshot was taken.
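The disk part of this can be illustrated with a small copy-on-write sketch in Python. This is a toy model with invented names, not the qcow2 or VDI formats real hypervisors use: the base image is frozen at snapshot time, all later writes land in an overlay, and reverting to the snapshot simply means throwing the overlay away.

    # Toy copy-on-write disk illustrating the disk portion of a snapshot.
    class CowDisk:
        def __init__(self, base_blocks):
            self.base = dict(base_blocks)   # frozen at snapshot time
            self.overlay = {}               # every change after the snapshot lands here

        def write(self, block_number, data):
            self.overlay[block_number] = data

        def read(self, block_number):
            # Prefer the overlay; fall back to the frozen base image.
            if block_number in self.overlay:
                return self.overlay[block_number]
            return self.base.get(block_number, b"\x00")

        def revert_to_snapshot(self):
            # Going back to the snapshot = discarding everything written since.
            self.overlay = {}

    disk = CowDisk({0: b"original boot sector"})
    disk.write(0, b"modified boot sector")
    assert disk.read(0) == b"modified boot sector"
    disk.revert_to_snapshot()
    assert disk.read(0) == b"original boot sector"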

Cloning a Virtual Machine

And the final cool feature of running an operating system in a virtual machine is that it's very easy to clone it, i.e. to make an exact copy. This is done by copying the hard disk image and tying it to a new virtual machine container. The file that contains the hard disk contents, together with a description of the properties of the virtual machine such as what kind of hardware is simulated, can also be copied to a different computer and used with a hypervisor there. If the other computer uses the same type of processor, the operating system running in the virtual machine will never notice the difference. Only if a different CPU is used (e.g. a faster CPU with more capabilities) can the guest operating system actually notice that something has changed. This is because the hypervisor does not simulate the CPU but grants the guest operating system access to the physical CPU up to the point where the guest wants to execute a machine instruction that communicates with the outside world, as described above. From the guest operating system's point of view, this looks as if the CPU had been changed on the motherboard.

By now you've probably gotten the idea why virtual machines are so powerful. Getting started with virtual machines on your desktop PC or your notebook is very easy. Just download an open source hypervisor such as VirtualBox and try it out yourself. Give the virtual machine one gigabyte of memory and connect an Ubuntu installation CD image to the virtual CD drive.
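If you prefer the command line over VirtualBox's graphical interface, the same steps can be scripted. The sketch below drives VirtualBox's VBoxManage tool from Python; the VM name, disk size and ISO path are example values, and option names may differ slightly between VirtualBox versions.

    # Create a VirtualBox VM with 1 GB of RAM and an attached Ubuntu installation
    # ISO by calling the VBoxManage command-line tool. Example values throughout.
    import subprocess

    VM_NAME = "ubuntu-test"
    ISO_PATH = "/path/to/ubuntu.iso"        # adjust to your downloaded Ubuntu image

    def vbox(*args):
        subprocess.run(["VBoxManage", *args], check=True)

    vbox("createvm", "--name", VM_NAME, "--ostype", "Ubuntu_64", "--register")
    vbox("modifyvm", VM_NAME, "--memory", "1024", "--cpus", "1")
    vbox("createhd", "--filename", f"{VM_NAME}.vdi", "--size", "10000")    # ~10 GB disk
    vbox("storagectl", VM_NAME, "--name", "SATA", "--add", "sata")
    vbox("storageattach", VM_NAME, "--storagectl", "SATA", "--port", "0",
         "--device", "0", "--type", "hdd", "--medium", f"{VM_NAME}.vdi")
    vbox("storageattach", VM_NAME, "--storagectl", "SATA", "--port", "1",
         "--device", "0", "--type", "dvddrive", "--medium", ISO_PATH)
    vbox("startvm", VM_NAME)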

Virtualization in Data Centers in the Cloud

Before discussing NFV and SDN there's one more thing to look at first, and that is virtualization in cloud computing. One aspect of cloud computing is large server farms operated by companies such as Amazon, Rackspace, Microsoft, etc., that offer virtualized servers to other companies and private individuals for use instead of equipment physically located on a company's premises or at a user's home. Such servers are immensely popular for running anything from simple web sites to large-scale video streaming portals. This is because companies and individuals using such cloud based servers get a fat pipe to the Internet they might not have where they are located, and all the processing power, memory and storage space they need and can afford, without buying any equipment. Leaving privacy and security issues out of the discussion at this point, using and operating such a server is no different from interacting with a local physical server. Most servers are not administered via a graphical user interface but via a command line console such as ssh (secure shell). So it doesn't matter whether a system administrator connects to a local Ubuntu server over the local network or to an Ubuntu server running in the cloud over the Internet, it looks and feels the same. Most of these cloud based servers are not running directly on hardware but in a virtual machine. This is because, even more so than on the desktop, server optimized processors and motherboards have become so powerful that they can run many virtual machines simultaneously. Modern x86 server CPUs have 8 to 16 cores and have direct access to dozens to hundreds of gigabytes of main memory. So it's not uncommon to see such servers running ten or more virtual machines simultaneously. Like on the desktop, many applications only require processing power infrequently, so if many such virtual servers are put on the same physical machine, CPU capacity can be used very efficiently as the CPUs are never idle but are always put to good use by some of the virtual machines at any point in time.

Virtual machines can also be moved between different physical servers while they are running. This is convenient, for example, when a physical server becomes overloaded because several virtual machines suddenly increase their workload. When that happens, less CPU capacity may be available per virtual machine than the cloud provider has guaranteed. Moving a running virtual machine from one physical server to another is done by copying the contents of the RAM currently used by the virtual machine on one physical server to a virtual machine instance on another. As the virtual machine is still running while its RAM is being copied, some parts of the RAM that were already copied will change, so the hypervisor has to keep track of this and re-copy those areas. At some point the virtual machine is stopped and the remaining RAM that is still different is copied over to the virtual machine on the target server. Once that is done, the state of the virtual machine, such as the CPU registers and the state of the simulated hardware, is also copied. At that point there is an exact copy of the virtual machine on the target server and the hypervisor lets the operating system in the cloned virtual machine continue to work from exactly the point where it was stopped on the original server. Obviously it is important to keep that cut-over time as short as possible, and in practice values in the order of a fraction of a second can be reached. Moving virtual machines from one physical server to another can also be used in other load balancing scenarios and for moving all virtual machines off a physical server so that the machine can be powered down for maintenance or replacement.
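In pseudo-code-like Python, the iterative "pre-copy" part of such a live migration looks roughly like the sketch below. All methods on source_vm and target_vm are hypothetical placeholders; real hypervisors track dirty memory pages with hardware support and handle many more corner cases.

    # Highly simplified sketch of pre-copy live migration: keep copying RAM
    # pages while the VM runs, re-copy the pages it dirties in the meantime,
    # and only pause the VM for the small remainder plus the CPU/device state.
    def live_migrate(source_vm, target_vm, max_rounds=30, stop_threshold=50):
        dirty_pages = set(source_vm.all_page_numbers())      # first round: copy everything

        for _ in range(max_rounds):
            if len(dirty_pages) <= stop_threshold:
                break
            source_vm.clear_dirty_tracking()
            for page in dirty_pages:
                target_vm.write_page(page, source_vm.read_page(page))
            # While we were copying, the still-running guest modified some pages.
            dirty_pages = source_vm.dirty_pages_since_last_clear()

        # Short stop-and-copy phase: the guest is paused only for this part.
        source_vm.pause()
        for page in source_vm.dirty_pages_since_last_clear():
            target_vm.write_page(page, source_vm.read_page(page))
        target_vm.load_device_state(source_vm.save_device_state())   # CPU registers etc.
        target_vm.resume()
        source_vm.discard()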

Managing Virtual Machines In the Cloud

Another aspect I'd quickly like to address is how to manage virtual resources. On a desktop or notebook PC, hypervisors such as VirtualBox bring their own administration interface to start, stop, create and configure virtual machines. A somewhat different approach is required when using virtual resources in a remote data center. Amazon Web Services, Google, Microsoft, Rackspace and many others offer web based administration of the virtual machines they offer, and getting up and running is as simple as registering for an account and selecting a pre-configured virtual machine image with a base operating system (such as Ubuntu Linux, Windows, etc.) with a certain amount of RAM and storage. Once done, a single click launches the instance and the server is ready for the administrator to install the software he would like to use. While Amazon and others use a proprietary web interface, others such as Rackspace use OpenStack, an open source alternative. OpenStack is also ideal for companies that want to manage virtual resources in their own physical data centers.
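OpenStack can also be driven through its API instead of the web dashboard. Here is roughly what launching an instance looks like from Python with the openstacksdk library, assuming a cloud named "mycloud" is configured in clouds.yaml; the image, flavor and network names are just examples.

    # Launch a virtual machine on an OpenStack cloud from Python (sketch).
    # Assumes the openstacksdk library and a cloud entry named "mycloud" in
    # clouds.yaml; image, flavor and network names are example values.
    import openstack

    conn = openstack.connect(cloud="mycloud")

    image = conn.compute.find_image("Ubuntu")
    flavor = conn.compute.find_flavor("m1.small")        # e.g. 1 vCPU, 2 GB RAM
    network = conn.network.find_network("private")

    server = conn.compute.create_server(
        name="demo-server",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)
    print("Server is up:", server.name, server.status)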

Network Function Virtualization

And now let's finally come to Network Function Virtualization (NFV) and jump straight to a practical example. Voice over LTE (VoLTE) requires a number of logical network elements called Call Session Control Functions (CSCFs) that are part of the IP Multimedia Subsystem (IMS). These network functions are usually shipped together with server hardware from network manufacturers. In other words, these network functions run on a server that is supplied by the same manufacturer. In this example, the CSCFs are just a piece of software and from a technical point of view there is no need to run them on a specialized server. The idea of NFV is to separate the software from the hardware and to put the CSCF software into virtual machines. As explained above, there are a number of advantages to this. In this scenario the separation means that network operators do not necessarily have to buy the software and the hardware from the same network infrastructure provider. Instead, the CSCF software is bought from a specialized vendor while off-the-shelf server hardware might be bought from another company. The advantage is that off-the-shelf server hardware is mass produced and there is stiff competition in that space from several vendors such as HP, IBM and others. In other words, the hardware is much cheaper. As the CSCF software is running in a virtual machine environment, the manufacturer of the hardware doesn't matter as long as the hypervisor can efficiently map simulated devices in the virtual machine to the physical hardware of the server. Needless to say, this is one of the most important goals of companies working on hypervisors such as vSphere or KVM and of companies working on server hardware. Once you have the CSCF network function running in virtual machines on hardware of your choice, you can do a lot of things that weren't possible before. As described above, it becomes very easy, for example, to add capacity by installing off-the-shelf server hardware and starting additional CSCF instances as required. Load sharing also becomes much easier because the physical server is not limited to running only virtual machines with a CSCF network function inside. As virtual machines are totally independent from each other, any kind of other operating system and software can be run in other virtual machines on the same physical server, and they can be moved from one physical server to another while they are running when a physical server reaches its processing capacity at some point. Running different kinds of network functions in virtual machines on standard server hardware also means that there is less specialized hardware for the network operator to maintain, and I suspect that this is one of the major goals they want to achieve, i.e. to get rid of custom boxes and vendor lock-in.
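To make the "starting additional CSCF instances as required" part concrete, here is a minimal scale-out sketch. Everything in it (the cloud object, current_load(), start_vm() and so on) is a hypothetical placeholder rather than a real orchestration API; it only illustrates the decision an operator's management system would make.

    # Toy scale-out logic for a virtualized CSCF: if the existing instances
    # get too busy, start another one on whichever physical server still has
    # spare capacity. All names here are hypothetical placeholders.
    LOAD_THRESHOLD = 0.8        # start a new instance above 80 % average load

    def scale_cscf(cloud, cscf_image):
        instances = cloud.list_instances(role="cscf")
        average_load = sum(vm.current_load() for vm in instances) / len(instances)
        if average_load > LOAD_THRESHOLD:
            # Any server with free capacity will do - the CSCF software neither
            # knows nor cares which physical box its virtual machine runs on.
            host = cloud.pick_least_loaded_server()
            cloud.start_vm(image=cscf_image, host=host, role="cscf")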

Another network function that lends itself to running in a virtual machine is the LTE Mobility Management Entity (MME). This network function communicates directly with mobile devices via an LTE base station and fulfills tasks like authenticating a user and his device when he switches it on, instructing other network equipment to set up a data tunnel for user data traffic to the LTE base station a device is currently located at, instructing routers to modify the tunnel endpoint when a user moves to another LTE base station, and generally keeping track of the device's whereabouts so it can page the device for incoming voice calls, etc. All of these management actions are performed over IP, so from an architecture point of view no special hardware is required to run MME software. It is also very important to realize that the MME only manages the mobility of the user and, when the location of the user changes, sends an instruction to a router in the network to change the path of the user data packets. All data exchanged between the user and a node on the Internet completely bypasses the MME. To put it in other words, the MME network function is itself the origin and sink of what are called signaling messages, which are encapsulated in IP packets. Such a network function is easy to virtualize because the MME doesn't care what kind of hardware is used to send and receive its signaling messages. All it does is put them into IP packets and send them on their way, and its knowledge of and insight into how these IP packets are sent and received is exactly zero. What could be done, therefore, is to put a number of virtual machines each running an MME instance and a couple of other virtual machines each running a CSCF instance on the same physical server. Mobile networks usually have many instances of MMEs and CSCFs, and as network operators add more subscribers, the amount of mobility management signaling increases, as does the amount of signaling traffic through CSCF functions required for establishing VoLTE calls. If both network functions run on the same standard physical hardware, network operators can first fully load one physical server before spinning up another, which is quite unlike the situation today, where the MME runs on dedicated and non-standardized hardware and a CSCF runs on another expensive non-standardized server and both are only running at a fraction of their total capacity. Reality is of course more complex than this due to logical and physical redundancy concepts to make sure there are as few outages as possible. This increases the number of CSCF and MME instances running simultaneously. But the concept of mixing and matching virtualized network functions on the same hardware scales and can also be used for much more complex scenarios, perhaps with even more benefits compared to the simple scenario that I just described.
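The point that the MME is just an origin and sink of signaling messages inside ordinary IP packets can be shown with a few lines of Python. This is a deliberately crude illustration: a real MME speaks S1AP over SCTP, and the message content and addresses below are invented; plain UDP and JSON are used only to keep the sketch short.

    # From the MME software's perspective, signaling is nothing but a message
    # placed into an IP packet and handed to the operating system's network
    # stack. (Real MMEs use S1AP over SCTP; UDP/JSON is used here only for
    # brevity, and all values are invented.)
    import json
    import socket

    ENB_ADDRESS = ("192.0.2.10", 36412)     # example address of an LTE base station

    message = {
        "procedure": "initial-context-setup",
        "ue_id": 4711,
        "tunnel_endpoint": "198.51.100.5",  # where user data should be tunneled to
    }

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(json.dumps(message).encode(), ENB_ADDRESS)

    # Whether this code runs on vendor hardware, an off-the-shelf server or
    # inside a virtual machine makes no difference to the MME software.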

Virtualizing Routers

In addition to network functions that are purely concerned with signaling, such as MMEs and CSCFs, networks contain lots of physical routers that look at incoming IP packets and decide over which interface they have to be forwarded and whether they should be modified before being sent out again. Practical examples from the mobile world are the LTE Serving Gateway (SGW) and the Packet Data Network Gateway (PDN-GW), which are instructed by the MME to establish, maintain and modify tunnels between a moving subscriber and the Internet to hide the user's mobility from the Internet. To make routers as fast as possible, parts of the decision making process are not implemented in software but in dedicated hardware (ASICs). Virtualizing routing equipment is therefore very tricky, because routing can no longer be performed in hardware but has to be done in software running in a virtual machine. That means that apart from making the routing decision process as efficient as possible, it is also important that forwarding IP packets from a physical network interface into a virtual machine and then sending them out again, altered or unaltered, over a virtual network interface to another physical network interface incurs as little overhead as possible. Intel seems to have spent a lot of effort in this area to close the gap as much as possible with its Data Plane Development Kit (DPDK) and Single Root I/O Virtualization (SR-IOV).
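The overhead problem can be felt in a deliberately naive software forwarding loop like the one sketched below (Linux only, needs root, interface names are example values): every single frame is copied from the kernel into the Python process and back out again. Shortening exactly this per-packet path is what DPDK and SR-IOV are about.

    # Naive software packet forwarding between two interfaces. Every frame
    # crosses the kernel/user-space boundary twice - the per-packet cost that
    # DPDK and SR-IOV try to minimize. Linux only; requires root privileges.
    import socket

    ETH_P_ALL = 0x0003      # capture all Ethernet protocols

    inbound = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
    outbound = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
    inbound.bind(("eth0", 0))       # example interface names
    outbound.bind(("eth1", 0))

    while True:
        frame = inbound.recv(65535)     # copy from kernel to user space ...
        outbound.send(frame)            # ... and back again, for every single packet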

Software-Defined Networking

And now let's turn to Software-Defined Networking (SDN), a term that is often used in combination with Network Function Virtualization. SDN is something entirely different, however, so let's forget about all the virtualization aspects discussed so far for a moment. Getting IP packets from one side of the Internet to the other requires routers. Each router between the origin and destination of an IP packet looks at the packet header and decides to which outgoing network interface to forward it. That starts in the DSL/Wi-Fi router box, which looks at each IP packet that is sent to it from a computer in the home network and decides whether or not to forward it over the DSL link to the network. Routers in the wide area network usually have more than one network interface, so here the routing decision, i.e. to which network port to forward a packet, is more complex. This is done with routing tables that contain IP address ranges and the corresponding outgoing network interfaces. Routing tables are not static but change dynamically, e.g. when network interfaces suddenly become unavailable due to a fault or when new routes to a destination become available. Even more often, routing tables change because subnets in other parts of the Internet are added and deleted all the time. There are a number of network protocols such as BGP (Border Gateway Protocol) that routers use to exchange information about which networks they can reach. This information is then used on each router to decide whether an update to the routing table is necessary. When the routing table is altered due to a BGP update from another router, the router will then also send out information to its downstream routers to inform them of the change. In other words, routing changes propagate through the Internet and each router is responsible on its own for maintaining its routing table based on the routing signaling messages it receives from other routers. For network administrators this means that they have to keep a very close eye on what each router in their network is doing, as each updates its routing table autonomously based on the information it receives from other routers. Routers from different manufacturers have different administration interfaces and different ways of handling routing updates, which adds additional complexity for network administrators. To make the administration process simpler and more deterministic, the idea behind Software-Defined Networking (SDN) is to remove the proprietary administration interface and the automated local modification of the routing table in the routers, and to perform these tasks in a single piece of software on a centralized network configuration platform. Routers would then only forward packets according to the rules and the routing table they receive from the centralized configuration platform. Changes to the routing table are made in a central place as well, instead of in a decentralized manner in each router. The interface SDN uses for this purpose is described in the OpenFlow specification, which is standardized by the Open Networking Foundation (ONF). A standardized interface enables network administrators to use any kind of centralized configuration and management software, independent of the manufacturers of the routing equipment they use in their network. Router manufacturers can thus concentrate on designing efficient router hardware and the software required for inspecting, modifying and forwarding packets.
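As a toy model of this split, the sketch below implements "dumb" switches that only hold a forwarding table and apply longest-prefix matching, while a single controller object is the only place where routes are installed. In a real deployment the controller would push these entries via OpenFlow messages instead of calling a Python method, and the addresses and port numbers are example values.

    # Toy SDN model: switches only forward according to centrally installed
    # rules; all routing decisions are made by one controller.
    import ipaddress

    class Switch:
        def __init__(self, name):
            self.name = name
            self.table = []                     # (prefix, output port), filled by the controller

        def install_route(self, prefix, port):
            self.table.append((ipaddress.ip_network(prefix), port))

        def forward(self, destination):
            addr = ipaddress.ip_address(destination)
            # Longest-prefix match over the centrally installed entries.
            matches = [(net, port) for net, port in self.table if addr in net]
            if not matches:
                return None                     # no rule: drop or ask the controller
            return max(matches, key=lambda entry: entry[0].prefixlen)[1]

    class Controller:
        def __init__(self, switches):
            self.switches = switches

        def push_route(self, switch_name, prefix, port):
            # The only place where routing tables are changed.
            self.switches[switch_name].install_route(prefix, port)

    switches = {"edge1": Switch("edge1")}
    controller = Controller(switches)
    controller.push_route("edge1", "0.0.0.0/0", port=1)
    controller.push_route("edge1", "10.0.0.0/8", port=2)
    print(switches["edge1"].forward("10.1.2.3"))    # -> 2 (more specific prefix wins)
    print(switches["edge1"].forward("192.0.2.1"))   # -> 1 (default route)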

Summary

This essay has become a lot longer than I originally intended, but I wanted to make sure it becomes clear that the concept of NFV does not come out of thin air but is actually based on ideas that have radically changed other areas of computing in the past decade. SDN, in contrast, is something radically new and addresses the shortcomings of a decentralized and proprietary router software and control approach, shortcomings that get worse as networks become more and more complicated. On the one hand, implementing NFV and SDN is not going to be an easy task because these concepts fundamentally change how the core of the Internet works today. On the other hand, the expected benefits in terms of new possibilities, flexibility, easier administration of the network and cost savings are strong motivators for network operators to push their suppliers to offer NFV- and SDN-compatible products.

Useful Links

There's tons of interesting material available on the Internet around NFV and SDN, but I've decided to only include three interesting links here that tell the story from where I leave off:

 

2 thoughts on “An NFV (Network Function Virtualization) and SDN Primer”

  1. Good overview, but your description of SDN might leave those readers wondering what is the gist of the matter.

    After all, NMS have been in place for decades to “make the administration process simpler and more deterministic” by “removing the proprietary administration interface … in the routers and performing these tasks in a single software on a centralized network configuration platform”.

    The important thing is that with SDN one is not focused on faults, performance or audit logs. Rather, one should ultimately be able to change the routing and networking logic atop a commodity IP packet transmission box with network interfaces in the same way that one can change the OS atop a commodity computer with CPU and storage peripherals. At least this is how I understand the issue.
