
Hardware acceleration in NFV applications to improve media plane processing performance

NFV, or network function virtualization in telecommunications, reduces the cost of expensive network equipment by running network functions as software on general-purpose COTS hardware such as x86 servers combined with virtualization technology. NFV decouples software from hardware and abstracts functions, so that network device functions no longer depend on dedicated hardware; resources can be shared fully and flexibly, new services can be developed and deployed rapidly, and capabilities such as isolation and self-healing become available. The NFV architecture is the latest communication network architecture defined by ETSI, and the new 5G network will adopt it. At the same time, part of the existing network is undergoing architecture transformation: for example, when the control plane and user plane of a 4G network are deployed separately (CU separation), the NFV architecture is usually used.

However, technology, like history, develops in an upward spiral. NFV has only just moved from dedicated hardware to general-purpose hardware, yet when it is applied to 5G networks and to 4G CU-plane separation, COTS hardware based mainly on x86 servers turns out to be unable to meet the network performance requirements. There are two main reasons for this:

First, the general-purpose x86 CPU gains generality at the expense of specialization: it is not well suited to specific workloads, for example highly parallel tasks such as codec conversion, packet forwarding, and encryption and decryption.

Second, x86 general-purpose processor performance no longer grows according to Moore’s Law, while the computing performance required by telecom services is growing faster than Moore’s Law would deliver.

For 5G networks, meeting the large-bandwidth and low-latency characteristics of 5G requires major performance improvements in the 5G RAN and 5G Core that x86 processors alone cannot deliver; for MEC located in edge data centers, constraints such as equipment room space, heat dissipation, and cost make it difficult to meet high-performance computing requirements with pure x86 processors.

For 4G CU separation, the user plane has demanding requirements for packet forwarding throughput and latency, and general-purpose x86 processors are not specialized for packet forwarding.

In 4G CU-separation and 5G application scenarios, because the general-purpose hardware used by NFV falls short in performance or cost for specific tasks, the scenario of pairing x86 processors with coprocessors (accelerator cards) such as FPGAs and GPUs reappears in the NFV architecture. Telecom networks have thus gone through a spiral development from dedicated hardware, to general-purpose COTS hardware, to general-purpose COTS hardware plus dedicated accelerator cards. The latest ETSI NFV architecture also introduces hardware acceleration, as shown in the following figure:

Figure 1 NFV reference architecture

In the new NFV architecture, NFVI is enhanced with virtualization capabilities for acceleration resources: accelerators are abstracted, presented as logical acceleration resources, and offered uniformly as a comprehensive acceleration service. The virtualization layer provides a unified interface to adapt to different forms of acceleration devices.

Current Status of Hardware Acceleration Solutions in NFV Applications

The current state of hardware acceleration solutions can be viewed from two angles: the research status in standards and open source organizations, and the application status of hardware acceleration solutions in deployments.

First, consider the research status of hardware acceleration in standards and open source organizations, taking two major ones as examples: ETSI and OpenStack.

In addition to defining the role of the hardware acceleration module in the NFV architecture, ETSI defines two implementation schemes for hardware acceleration: the pass-through scheme and the abstract model scheme.

The pass-through scheme, namely PCI/PCIe pass-through, directly passes the hardware accelerator card in a PCI slot to a specific virtual machine. It is currently the most common scheme. Its disadvantage is that the hardware is monopolized by one virtual machine, and the upper-layer application or virtual machine must maintain the hardware drivers for the different accelerator cards itself.
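
As a concrete illustration of the pass-through scheme, the sketch below hot-plugs a PCI accelerator card into a running KVM guest through the libvirt Python bindings. The VM name and the PCI address are placeholders, and a real deployment would normally drive this through the cloud platform rather than calling libvirt directly.

```python
# Minimal sketch: pass a PCI accelerator card through to a specific VM
# (pass-through scheme). The domain name and PCI address are placeholders.
import libvirt

HOSTDEV_XML = """
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- physical PCI address of the accelerator card on the host -->
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
</hostdev>
"""

conn = libvirt.open('qemu:///system')      # connect to the local hypervisor
dom = conn.lookupByName('vnf-media-vm')    # hypothetical VNF virtual machine
# Attach the device to the running guest and persist it in its definition
dom.attachDeviceFlags(
    HOSTDEV_XML,
    libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG,
)
conn.close()
```

Because the whole device is handed to one guest, the guest image must carry the vendor driver for that exact card, which is precisely the maintenance burden noted above.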

In the abstract model scheme, the NFVI (that is, the hypervisor) maintains the “Backend/HW Driver” module, while at the VNF layer each VNFC maintains a “Generic Driver” module. NFVI is responsible for scanning the accelerator card, loading its driver, managing its hardware virtualization, and mounting the virtual accelerator card into the virtual machine. The advantages are that one accelerator card can be used by multiple virtual machines, acceleration resources can be loaded or released on demand, and the VM uses a single common driver for all kinds of accelerator cards, which keeps virtual machine maintenance very simple.
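
The split between a vendor-specific backend in NFVI and a generic driver inside the VNFC can be pictured with the small interface sketch below. The class and method names are illustrative only and are not taken from any ETSI specification or vendor SDK.

```python
# Illustrative sketch of the abstract model scheme: VNFCs program against one
# generic driver interface, while NFVI plugs in vendor-specific backends.
from abc import ABC, abstractmethod


class AcceleratorBackend(ABC):
    """Vendor-specific 'Backend/HW Driver' kept in the NFVI (hypervisor side)."""

    @abstractmethod
    def submit(self, task: bytes) -> bytes:
        ...


class FpgaBackend(AcceleratorBackend):
    def submit(self, task: bytes) -> bytes:
        # A real backend would hand the task to the card via the vendor SDK.
        return b"fpga-result"


class GenericDriver:
    """The 'Generic Driver' used inside the VNFC, identical for every card type."""

    def __init__(self, backend: AcceleratorBackend):
        self._backend = backend  # the virtual accelerator mounted by NFVI

    def offload(self, task: bytes) -> bytes:
        return self._backend.submit(task)


# The VNFC only ever sees GenericDriver; NFVI decides which card serves it.
driver = GenericDriver(FpgaBackend())
print(driver.offload(b"gtp-packet"))
```

In practice the same split is typically realized with para-virtualized front-end and back-end drivers, so the VM image stays identical regardless of which physical card sits underneath.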

While ETSI defines the hardware acceleration framework and implementation schemes, the OpenStack open source community has launched the Cyborg project, whose main goal is to manage the drivers, dependencies, installation, and uninstallation of various accelerators. It can attach accelerators to virtual machine instances created by Nova and aims to provide a general hardware acceleration management framework. OpenStack focuses on integrating acceleration hardware drivers in the infrastructure and making acceleration hardware visible to the VIM; it does not involve the upper-layer MANO. At present the Cyborg project is still a framework without production-ready code. ZTE actively participates in the OpenStack Cyborg community and is mainly committed to developing a Cyborg driver to support high-precision clock synchronization cards for future 5G, edge, and other high-reliability, low-latency scenarios. ZTE also serves as the lead of a Cyborg sub-team and takes an active part in the work of the Cyborg documentation team.
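
The intent of such a driver can be sketched roughly as follows. This is a hypothetical skeleton, not the actual Cyborg driver interface, which was still being defined at the time of writing; the vendor ID and device type below are placeholders.

```python
# Hypothetical skeleton of an accelerator driver in the spirit of the Cyborg
# project: discover devices on the host so the management layer can track them.
import glob


class ClockSyncCardDriver:
    """Placeholder driver for a high-precision clock synchronization card."""

    VENDOR_ID = "0xabcd"  # placeholder PCI vendor ID, not a real device

    def discover(self):
        """Scan sysfs for matching PCI devices and describe them."""
        devices = []
        for vendor_file in glob.glob("/sys/bus/pci/devices/*/vendor"):
            with open(vendor_file) as f:
                if f.read().strip() == self.VENDOR_ID:
                    pci_address = vendor_file.split("/")[-2]
                    devices.append({"pci_address": pci_address,
                                    "type": "clock-sync"})
        return devices


print(ClockSyncCardDriver().discover())
```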

From the standards and open source work introduced above, it is clear that current hardware acceleration schemes, especially the hardware acceleration abstraction scheme, are still immature. At the same time, using acceleration hardware requires cooperation among accelerator card vendors, cloud platform vendors, and network element vendors: the cloud platform layer must integrate drivers and virtualization for the corresponding accelerator card products and provide matching acceleration libraries or SDKs, which the network element layer then calls.

The application status of hardware acceleration solutions in NFV can be summarized as follows:

1. Usage is simplistic. Today the main approach is for a VM to use the corresponding acceleration hardware directly. The accelerator card cannot be used elastically or shared among multiple VMs, which results in unbalanced resource utilization;

2. There are no common specifications for acceleration hardware and interfaces. Each hardware vendor’s acceleration chips require vendor-specific drivers, the SDKs provided by cloud platform vendors differ considerably, and the network element layer has to be adapted and customized for each, so the degree of standardization is low;

3. MANO needs to be extended. The use of hardware acceleration can be divided into four stages: perception, allocation, scheduling, and release. Perception requires the cloud platform to identify hardware types and capacity; the VNFM and NFVO must be able to analyze and schedule network elements’ requests for acceleration hardware resources; and the cloud platform must monitor, deploy, release, and where necessary reprogram acceleration resources. All of this requires extensions to MANO, as sketched below.
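
A minimal sketch of that four-stage lifecycle follows, purely to make the division concrete; the stage handling is a placeholder for the real VIM/VNFM/NFVO interactions, and the request contents are invented for illustration.

```python
# Minimal sketch of the four-stage accelerator lifecycle named above:
# perception -> allocation -> scheduling -> release.
from enum import Enum, auto


class Stage(Enum):
    PERCEPTION = auto()   # cloud platform identifies accelerator type and capacity
    ALLOCATION = auto()   # VNFM/NFVO grant the VNF's acceleration resource request
    SCHEDULING = auto()   # VIM places the VM on a node with the required card
    RELEASE = auto()      # resources are returned and, if needed, reprogrammed


def run_lifecycle(request: dict) -> None:
    # Placeholder: each stage would call into MANO/VIM APIs in a real system.
    for stage in Stage:
        print(f"{stage.name.lower()}: handling request for {request['accelerator']}")


run_lifecycle({"accelerator": "fpga-gtp"})
```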

ZTE NFV Hardware Acceleration Solution

As a leading global provider of integrated communication solutions, ZTE holds an industry-leading position in 5G. Hardware acceleration is an important means of improving network performance in 5G networks. ZTE actively participates in the hardware acceleration standards and solutions of ETSI and OpenStack, and has launched its own hardware acceleration solution, whose architecture is shown in the following figure:

Figure 2 Hardware accelerator card virtualization management and application

ZTE’s hardware acceleration solution has the following characteristics:

The compute node is the key node for accelerator card virtualization management and provides the following capabilities:

・Discover the hardware acceleration board through the FPGA or GPU driver, record the local hardware acceleration capability set, integrate the driver as a general driver, and report the capabilities to the VIM;

・Create virtual machines with specific acceleration capabilities, load the corresponding acceleration hardware, and provide a general-purpose front-end driver to the virtual machine;

The VIM node mainly manages the acceleration capabilities of compute nodes, reports them to the VNFM/NFVO or to other orchestrators such as Heat, and completes the deployment of virtual machines with acceleration capabilities.
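
To make the compute-node role concrete, the sketch below shows how a node agent might scan the PCI bus for accelerator cards, build a capability set, and report it upward. The PCI class codes, node name, and reporting call are assumptions for illustration and are not taken from ZTE’s implementation.

```python
# Sketch of a compute-node agent: find accelerator cards, build a capability
# set, and report it toward the VIM.
import glob
import json

# Hypothetical mapping from PCI class-code prefix to an acceleration capability.
CAPABILITY_BY_CLASS = {
    "0x1200": "fpga-offload",   # processing accelerators
    "0x0302": "gpu-compute",    # 3D / compute controllers
}


def scan_accelerators():
    """Walk sysfs and collect accelerator cards plus the capability they provide."""
    capabilities = []
    for class_file in glob.glob("/sys/bus/pci/devices/*/class"):
        with open(class_file) as f:
            class_prefix = f.read().strip()[:6]
        if class_prefix in CAPABILITY_BY_CLASS:
            capabilities.append({
                "pci_address": class_file.split("/")[-2],
                "capability": CAPABILITY_BY_CLASS[class_prefix],
            })
    return capabilities


def report_to_vim(capabilities):
    # A real agent would call the VIM's API; here the payload is just printed.
    print(json.dumps({"node": "compute-01", "accelerators": capabilities}))


report_to_vim(scan_accelerators())
```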

ZTE’s hardware acceleration solution currently supports FPGA-based GTP service acceleration, GPU-based video and audio service acceleration, and QAT encryption and decryption acceleration. ZTE will work with industry partners to facilitate network transformation with advanced technologies and comprehensively enhance 5G network capabilities.
