CXL (Compute Express Link) is revolutionizing computer architecture by enabling shared memory pools for processors, GPUs, and accelerators. Discover how CXL 2.0 and 3.0 work, what sets CXL apart from PCIe and NVLink, and why it's crucial for data centers, AI, and the future of computing.
The rapid advancement of processors and graphics cards has brought new challenges, especially when it comes to memory bottlenecks. To address these issues, a cutting-edge technology called CXL (Compute Express Link) is gaining traction in 2025. CXL is a high-speed interface for processors and memory that allows resources to be combined into a unified pool, enabling flexible allocation across devices. Built on PCI Express, CXL goes far beyond simple data transfer by providing low-level memory operations, which fundamentally change how memory and processors interact.
Essentially, CXL represents the next leap in computer architecture. While PCIe 6.0 redefined data transfer speeds for PCs, servers, and SSDs, CXL enables processors, GPUs, and accelerators to access shared memory without redundant copying. In this article, we'll explore what CXL is, how versions 2.0 and 3.0 work, what sets it apart from PCIe, and why it's crucial for servers, data centers, and artificial intelligence.
CXL (Compute Express Link) is a revolutionary high-speed interface connecting processors, memory, and accelerators like GPUs or AI chips. Unlike traditional PCI Express, which simply transfers data between devices, CXL allows them to collaboratively access and use the same memory.
In the past, each processor or graphics card had its own dedicated memory. With CXL, devices can share a common memory pool, leading to:

- Higher memory utilization, since idle capacity held by one device becomes available to others
- Less redundant copying of data between separate CPU and GPU memories
- Flexible, on-demand allocation of capacity as workloads change

Practically, this solves several key challenges:

- Stranded memory: RAM sitting idle because it is tied to an underutilized processor
- Latency from constantly shuttling data between CPU and GPU memory
- Rigid capacity planning, where every device must be provisioned for its worst case
Ultimately, CXL is designed to eliminate memory bottlenecks and prepare IT infrastructure for future workloads, from cloud data centers to personal computers.
The CXL (Compute Express Link) standard is evolving rapidly, with several major versions released as of 2025.
Debuting in 2019, the first version (CXL 1.0/1.1) offered basic PCI Express compatibility, letting processors directly access the memory of devices connected over CXL. Its functionality was limited, however: it supported only direct point-to-point links between a host and a single device, with no switching.
Launched in 2020, CXL 2.0 introduced a game-changing feature: memory pooling. Through a single level of CXL switching, devices in a system can dynamically share pooled memory, with capacity assigned to processors, GPUs, and accelerators as needed.
This is especially vital for data centers and cloud services, where workloads constantly shift and memory needs to be allocated on the fly.
Updated in 2022, CXL 3.0 brings:

- Double the bandwidth, by moving to the PCIe 6.0 physical layer (64 GT/s per lane)
- Multi-level switching and fabric topologies that connect many hosts and devices
- True memory sharing, where multiple hosts coherently access the same region, rather than only partitioned pooling
- Peer-to-peer communication, letting devices exchange data directly without routing through a host CPU
With CXL 3.0, memory can truly function as a shared resource pool for an entire data center.
In short, CXL 2.0 and 3.0 pave the way for memory to be dynamically shared, moving beyond the traditional model of being tied to a single processor. This flexibility is set to power the next generation of high-performance computing systems.
One of the core advantages of CXL (Compute Express Link) is its transformation of memory management.
Traditionally, each CPU or GPU in a computer or server is assigned its own memory (for example, 64 GB RAM per CPU). The same holds for GPUs and other accelerators, each with its own dedicated VRAM. With CXL, this rigid separation disappears. Devices can now tap into a shared memory pool and use as much as they need, when they need it.
For instance, a server with 1 TB of shared memory can flexibly allocate it between CPUs and GPUs based on current tasks.
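To make the pooling idea concrete, here is a toy Python sketch of a shared allocator. The `MemoryPool` class is purely illustrative, not any real CXL API: actual pooling is handled in hardware and by the operating system, but the bookkeeping logic is the same in spirit.

```python
class MemoryPool:
    """Toy model of a CXL-style shared memory pool (capacity in GB)."""

    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.allocations = {}  # device name -> GB currently held

    def free_gb(self):
        return self.capacity_gb - sum(self.allocations.values())

    def allocate(self, device, gb):
        """Grant `gb` to `device` if the pool has free capacity."""
        if gb > self.free_gb():
            raise MemoryError(f"pool exhausted: only {self.free_gb()} GB free")
        self.allocations[device] = self.allocations.get(device, 0) + gb

    def release(self, device, gb):
        """Return `gb` from `device` back to the pool."""
        held = self.allocations.get(device, 0)
        self.allocations[device] = max(0, held - gb)


# A 1 TB pool shifting capacity between a CPU and a GPU as workloads change.
pool = MemoryPool(1024)
pool.allocate("cpu0", 256)
pool.allocate("gpu0", 512)
pool.release("cpu0", 128)   # CPU load drops...
pool.allocate("gpu0", 256)  # ...so the GPU can claim more of the pool
```

The point of the sketch is that no device "owns" a fixed slice: capacity flows to whichever consumer needs it at the moment, which is exactly what a static per-device memory layout cannot do.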
This is especially relevant for neural network training and big data analytics, where memory bandwidth often becomes the main performance limiter.
While DRAM itself keeps advancing, with DDR5 now mainstream and a future DDR6 generation taking shape, CXL goes even further: it introduces a radically new architecture for working with memory, set to redefine the future of computing.
In 2025, the main area where CXL (Compute Express Link) is making its mark is in data centers and AI infrastructure.
Today's data centers often struggle with uneven server memory utilization: some processors have idle RAM, while others run short. CXL enables the creation of a unified memory pool, so resources are dynamically distributed as needed. This boosts hardware efficiency and cuts computation costs.
Training large neural networks requires vast amounts of RAM and VRAM. Traditionally, data must be constantly copied between CPU and GPU, causing delays. With CXL, processors and accelerators can work with shared datasets directly, speeding up model training and making infrastructure more adaptable.
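CXL's hardware-level coherency cannot be demonstrated in portable code, but the difference between sharing data and copying it has a close software analogy in Python's zero-copy `memoryview`. The sketch below uses that analogy only to illustrate the concept; it involves no actual CXL mechanism.

```python
# Shared vs. copied data: a memoryview lets two "consumers" see the same
# buffer without duplicating it, while bytes() makes an independent copy
# (the traditional copy-between-CPU-and-GPU model).
buffer = bytearray(b"training batch 0001")

shared_view = memoryview(buffer)   # zero-copy: same underlying memory
private_copy = bytes(buffer)       # full copy: later changes won't propagate

buffer[15:19] = b"0002"            # the "producer" updates the data in place

print(shared_view.tobytes())       # the shared view reflects the update
print(private_copy)                # the copy still holds the stale snapshot
```

The copy not only wastes memory; it also goes stale the moment the original changes, which is why pre-CXL pipelines spend so much time re-synchronizing data between CPU and GPU.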
CXL is especially promising in edge computing: distributed computing at the network's edge. Here, rapid and efficient data processing is critical, and shared memory ensures resources are allocated exactly where they're needed.
As a result, CXL is not just a new interface; it's becoming a cornerstone of future cloud and AI technologies.
To understand what makes CXL (Compute Express Link) unique, it helps to compare it to similar technologies.
In short, while PCIe simply "moves data," CXL lets processors and accelerators work on the same data without copying.
We've already covered what PCIe 6.0 is and how it differs from PCIe 5.0, and it's precisely this platform that CXL builds upon.
NVLink is NVIDIA's proprietary technology for high-bandwidth connections between GPUs and CPUs, but it's limited to NVIDIA hardware and mainly targets graphics accelerators. In contrast, CXL is an open standard supported by Intel, AMD, NVIDIA, Microsoft, and others, suitable for CPUs, GPUs, FPGAs, neural chips, and server memory alike.
Although CXL (Compute Express Link) is still in its early adoption phase, it's clear that it will shape the future of computing.
Key areas of development include:

- Broader adoption in servers and cloud data centers, where pooled memory raises utilization and cuts costs
- AI infrastructure, where shared memory accelerates the training of large models
- Edge computing, where capacity must be allocated exactly where it's needed
- Eventually, personal computers, as the standard matures and hardware support spreads
Experts predict that by the end of the decade, CXL will be an essential component of servers and supercomputers, and feature in standard processor specifications.
CXL is more than just a new interface: it marks a fundamental change in computer architecture. By allowing processors, GPUs, and other devices to work with shared memory, CXL eliminates unnecessary data copying and reduces latency. Already adopted in server and data center solutions, CXL is poised to become the standard in all high-performance systems. While PCIe 6.0 delivers blazing data transfer speeds, CXL is redefining how memory and processors operate together.
In the coming years, expect to see CXL revolutionize cloud computing, artificial intelligence, and the future of personal computers.