MWC 2023: Nokia Goes All-In with In-Line Acceleration for Cloud RAN
The News: Nokia provided an update on its Cloud RAN portfolio development in the lead-up to MWC 2023. Nokia identifies hardware acceleration as a critical element in Cloud RAN performance, with the overall benefits of In-Line acceleration outweighing those of Look-Aside acceleration for any Cloud RAN deployment aiming for the highest performance, lowest Total Cost of Ownership (TCO), and greatest flexibility. Read the Nokia blog here.
Analyst Take: Nokia is again declaring its full endorsement of the “In-Line” architecture for Cloud RAN. Cloud RAN is based on the vertical disaggregation of Radio Access Network (RAN) baseband software from baseband processing hardware. To achieve this, the virtual Distributed Unit (vDU) and virtual Centralized Unit (vCU) baseband functions run as containers on a Container-as-a-Service (CaaS) software layer, which in turn runs on Commercial-Off-the-Shelf (COTS) server hardware. At the heart of these servers are General-Purpose Processors (GPPs) and hardware accelerators.
The two primary options for implementing Cloud RAN acceleration are the so-called “Look-Aside” and “In-Line” architectures. To review, in the Look-Aside architecture option the general-purpose central processing unit (CPU) acts as the master for L1 processing, with key functions, such as Forward Error Correction (FEC), sent back and forth to the hardware accelerator.
The hardware accelerator can be a separate Peripheral Component Interconnect Express (PCIe) card in the server or be located on the same die alongside the CPU. In both Look-Aside acceleration cases, the CPU still processes many of the L1 real-time computations, for which it can prove inefficient, as well as the L2/L3 processing for which the GPP is better suited.
Through the In-Line architecture option, all or part of the L1 processing is offloaded from the CPU to a RAN SmartNIC PCIe card. SmartNICs are commonly used as accelerators in public and private cloud data centers. In-Line acceleration SmartNICs use dedicated, optimized technology for L1 processing and fully relieve the general-purpose processors (GPPs) of the highly intense L1 processing demands. This liberates valuable CPU resources, enabling higher performance for L2 and L3 application processing. With an In-Line SmartNIC, less complex and less expensive non-accelerated CPUs can be used for the L2 and L3 processing to which they are better suited.
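The CPU-load difference between the two architectures can be sketched with a toy model. All cycle counts and the FEC share below are hypothetical placeholders for illustration, not vendor benchmarks:

```python
# Toy model contrasting Look-Aside and In-Line L1 acceleration.
# All workload figures are hypothetical units, not measured data.

L1_CYCLES = 100   # total L1 work per transport block (assumed)
FEC_SHARE = 0.4   # fraction of L1 work that is FEC (assumed)
L23_CYCLES = 60   # L2/L3 work per transport block (assumed)

def look_aside_cpu_load():
    """CPU keeps all L1 work except FEC, which round-trips to the accelerator."""
    return L1_CYCLES * (1 - FEC_SHARE) + L23_CYCLES

def in_line_cpu_load():
    """SmartNIC absorbs the full L1 pipeline; the CPU handles only L2/L3."""
    return L23_CYCLES

print(look_aside_cpu_load())  # 120.0 -> CPU still carries most of L1
print(in_line_cpu_load())     # 60   -> CPU freed for L2/L3
```

The point of the sketch is structural: in Look-Aside, only selected functions (here FEC) leave the CPU, so the bulk of real-time L1 work stays on the GPP; in In-Line, the whole L1 pipeline lands on the SmartNIC.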
I view the In-Line architecture as delivering competitive advantages over Look-Aside solutions across efficiency, capacity, and connectivity considerations, while also improving power consumption by combining high-performance SmartNIC capabilities with COTS server hardware. The processing power and capabilities of both architectures will continue to improve through silicon design advances, with In-Line options provided by chipset suppliers such as Marvell, Qualcomm, AMD/Xilinx, and NVIDIA, and Look-Aside silicon supplied by Intel.
Of note, Nokia uses customized Marvell OCTEON silicon to augment its ReefShark chipset family across key applications such as multi-RAT RAN and transport. Looking ahead, I see L1 processing demands increasing substantially due to higher radio capacity needs and lower latency demands in the progression from 5G to 5G-Advanced and eventually 6G, across vDU/vCU, multi-sector macrocell base stations, microcell base stations, and intelligent radio head environments.
With In-Line acceleration, L1 capacity can be increased in a targeted, cost-conscious manner by adding SmartNICs independently of the CPUs processing L2/L3. Conversely, CPU capacity can be increased independently of In-Line SmartNIC capacity. With Look-Aside acceleration, all capacity enhancements require adding CPUs, even when not all CPU functions are needed for processing every software layer (L1, L2, L3). In-Line acceleration can also streamline the full architecture with a clean L1-L2 interface.
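The independent-scaling argument can be made concrete with a small capacity-planning sketch. The unit capacities and relative costs below are purely illustrative assumptions, not real component prices:

```python
# Hypothetical capacity-planning sketch: scaling L1 and L2/L3 independently
# (In-Line) versus scaling monolithically via CPUs (Look-Aside).
# Capacities and costs are illustrative assumptions only.

SMARTNIC = {"l1_capacity": 4, "cost": 1.0}                     # assumed
CPU      = {"l1_capacity": 1, "l23_capacity": 2, "cost": 2.0}  # assumed

def in_line_cost(l1_cells, l23_cells):
    # Add SmartNICs for L1 and CPUs for L2/L3 independently of each other.
    nics = -(-l1_cells // SMARTNIC["l1_capacity"])   # ceiling division
    cpus = -(-l23_cells // CPU["l23_capacity"])
    return nics * SMARTNIC["cost"] + cpus * CPU["cost"]

def look_aside_cost(l1_cells, l23_cells):
    # Every capacity increase means more CPUs, whichever layer is the bottleneck.
    cpus = max(-(-l1_cells // CPU["l1_capacity"]),
               -(-l23_cells // CPU["l23_capacity"]))
    return cpus * CPU["cost"]

# Growing L1 demand only: In-Line adds cards, Look-Aside adds whole CPUs.
print(in_line_cost(8, 4))     # 2 NICs + 2 CPUs = 6.0
print(look_aside_cost(8, 4))  # 8 CPUs = 16.0
```

Under these assumed numbers, an L1-heavy growth scenario favors In-Line because the added capacity comes from SmartNICs sized for L1, not from over-provisioned general-purpose CPUs.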
The O-RAN Alliance seeks to open the interface between L1 and L2. In-Line SmartNICs use the standard PCIe interface and integrate with all compute and cloud platforms, with the option of x86 or Arm processors for disaggregated L2 and L3 processing. I see this delivering greater solution design flexibility and a broader choice of cloud server hardware providers.
I also see emerging Cloud RAN (vDU+vCU) configurations as delivering notably lower power consumption per cell with an In-Line approach compared to Look-Aside, yielding lower cost per cell for the In-Line solution.
Additionally, In-Line SmartNICs work with cloud-native vDU and vCU software purpose-built to align with cloud-native principles, including data decoupled from applications, loosely coupled microservices, elastic horizontal scaling, automated lifecycle management, and continuous software integration and delivery. For example, this approach supports scaling containerized vDU application functions across nodes managed by a container orchestration platform such as Kubernetes. As a result, cloud efficiency is higher with the In-Line accelerated Cloud RAN architecture, as it relaxes the latency requirements on the CaaS layer running on the CPU, leading to savings on the real-time features of CaaS.
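The horizontal-scaling principle mentioned above can be sketched as the proportional scaling rule a platform such as Kubernetes applies to containerized replicas (this mirrors the shape of the Horizontal Pod Autoscaler's algorithm; the thresholds and replica counts are hypothetical):

```python
# Minimal sketch of horizontal scaling for containerized vDU replicas,
# in the style of a Kubernetes HPA rule. Thresholds are hypothetical.
import math

def desired_replicas(current, cpu_utilization, target=0.6, max_replicas=10):
    """Scale replica count proportionally to observed load vs. target."""
    want = math.ceil(current * cpu_utilization / target)
    return max(1, min(want, max_replicas))

print(desired_replicas(3, 0.9))  # load above target -> scale out to 5
print(desired_replicas(5, 0.3))  # load below target -> scale in to 3
```

Because the In-Line SmartNIC handles the hard real-time L1 work, the containers being scaled here carry only L2/L3 load, which is what makes this kind of elastic, stateless scaling practical on a general-purpose CaaS layer.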
Key Takeaways: Nokia Fully Commits to Cloud RAN In-Line Acceleration
I believe Nokia’s adoption of the In-Line acceleration approach can help accelerate and broaden market acceptance of Cloud RAN, including across multi-sector macrocell base stations, microcell base stations, intelligent radio heads, and O-RAN vDU implementations. It can also help advance 5G ecosystem outcomes such as the Vodafone/Nokia collaboration to spread Open RAN across Europe and O-RAN Alliance support. This includes providing lower latency, higher system capacity, and higher per-user data rates, which power the densification of the RAN and spur deployment of additional network nodes, further propagating Cloud RAN adoption.
In my view, 5G RANs will increasingly use wider bandwidths, triggering torrential demand for high-throughput microcells and full macrocell capabilities. As such, the In-Line acceleration Nokia advocates will become increasingly essential to delivering optimal 5G capacity, energy efficiency, cloud nativeness, and time-to-market advantages across O-RAN and vRAN implementations.
Disclosure: Futurum Research is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of Futurum Research as a whole.
Other insights from Futurum Research:
Nokia Fiscal Q4 2022 & FY 2022: Demonstrates Impressive Company-wide Progress with Turnaround Mission
A Deep Dive into Nokia’s Evolution, Growth, and Brand Strategy
Marvell Fiscal Q3 2023: Record Quarter Revenues and New Cloud Products Pave Way for Long-Term Growth
Image Credit: dplNews
Ron is an experienced research expert and analyst, with over 20 years of experience in the digital and IT transformation markets. He is a recognized authority at tracking the evolution of and identifying the key disruptive trends within the service enablement ecosystem, including software and services, infrastructure, 5G/IoT, AI/analytics, security, cloud computing, revenue management, and regulatory issues. Read Full Bio.