With the rapid rise of generative AI (GenAI) and surging demand for large-scale data centers, data transmission performance has become a key factor in overall system capability once deployments reach a certain scale. The market is gradually shifting toward interconnect architectures with lower latency and higher throughput to meet increasingly demanding computing requirements.
Against this high-performance computing backdrop, Compute Express Link (CXL), an open, standardized interconnect protocol, is designed to enable memory sharing and expansion, offering highly flexible resource pooling across heterogeneous computing platforms. It has drawn widespread attention across the industry.
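To make the memory expansion idea concrete: on Linux, a CXL Type 3 memory expander is typically exposed to software as a CPU-less NUMA node, so ordinary NUMA APIs can place data on it. The sketch below is illustrative only; it assumes libnuma is installed and that the CXL memory shows up as node 1, neither of which comes from this article.

```c
/* Minimal sketch: allocating a buffer from CXL-attached memory that the OS
 * exposes as a CPU-less NUMA node.
 * Assumptions (not from the article): libnuma is available and the CXL
 * expander appears as NUMA node 1. Build: gcc cxl_alloc.c -lnuma */
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return EXIT_FAILURE;
    }

    int cxl_node = 1;                    /* assumed node ID of the CXL expander */
    size_t size = 256UL * 1024 * 1024;   /* 256 MiB buffer */

    /* Bind the allocation's physical pages to the CXL memory node. */
    void *buf = numa_alloc_onnode(size, cxl_node);
    if (buf == NULL) {
        fprintf(stderr, "allocation on node %d failed\n", cxl_node);
        return EXIT_FAILURE;
    }

    memset(buf, 0, size);                /* touch pages so they are actually placed */
    printf("placed %zu bytes on NUMA node %d (CXL-attached memory)\n",
           size, cxl_node);

    numa_free(buf, size);
    return EXIT_SUCCESS;
}
```

On such a host, `numactl -H` would list the extra memory-only node, and the same placement could also be achieved without code changes by launching an application under `numactl --membind`.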
Nvidia, for its part, has built a highly vertically integrated, closed ecosystem around its own interconnect technologies, including NVLink, NVSwitch, NVLink-C2C, InfiniBand, and the Spectrum-X Ethernet platform. These interconnects deliver ultra-high bandwidth and ultra-low latency, enabling heterogeneous computing components such as GPUs, CPUs, and DPUs to operate in concert and significantly boosting overall system performance. They are deployed primarily in today's tightly integrated, closed AI data centers.
CXL becomes key to easing four major bottlenecks in AI computing
Chart 1: Overview of current architecture limitations in memory expansion
CXL holds advantages in virtualization; memory pooling tech boosts demand
CXL's critical growth phase in 2026; hardware support to reach 90% by 2028
Chart 3: Share of hardware supporting CXL applications, 2022-2028

