
The composable future of data centers

Composability is the latest hot word for data centers. It makes data center resources as accessible as cloud services, giving enterprises an inexpensive, efficient way to deliver a cloud-like experience on their own on-premises infrastructure.

Liqid is a leader in data center composability. Samsung is a leader in memory technology. Tanzanite Silicon Solutions is a leader in memory pooling tech. All three companies joined forces to show off composable memory at Dell Technologies World 2022 in Las Vegas this week.

Delivering high-speed CPU-to-memory connections for the first time, the Compute Express Link (CXL) 2.0 protocol decouples DRAM from the CPU, the final hardware element to be disaggregated. With native support for CXL, Liqid Matrix composable disaggregated infrastructure (CDI) software can now pool and compose memory in tandem with GPU, NVMe, persistent memory, FPGA, and other accelerator devices.
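To make the composition model concrete, the idea of claiming devices from a disaggregated pool and binding them into a bare-metal server can be sketched as follows. This is purely illustrative; the `Fabric` class and `compose` method are assumptions for the sketch, not Liqid Matrix's actual API:

```python
# Illustrative sketch of composable disaggregated infrastructure (CDI).
# All names here are hypothetical -- they do NOT reflect the real Liqid Matrix API.

class Fabric:
    """Models a CXL/PCIe fabric holding a pool of disaggregated devices."""

    def __init__(self):
        self.pool = []  # free devices, e.g. {"kind": "dram", "units": 1024}

    def add(self, kind, units):
        self.pool.append({"kind": kind, "units": units})

    def compose(self, requests):
        """Claim devices from the pool to assemble a bare-metal server."""
        server = []
        for kind, units in requests:
            dev = next((d for d in self.pool
                        if d["kind"] == kind and d["units"] >= units), None)
            if dev is None:
                raise RuntimeError(f"no free {kind} in pool")
            dev["units"] -= units          # carve capacity out of the pool
            server.append((kind, units))
        return server

fabric = Fabric()
fabric.add("dram", 1024)   # GB of pooled, CXL-attached DRAM
fabric.add("gpu", 8)
fabric.add("nvme", 16)

# Compose a server with 512 GB of pooled DRAM and 2 GPUs.
server = fabric.compose([("dram", 512), ("gpu", 2)])
```

The point of the sketch is that DRAM becomes just another entry in the device pool, carved out and composed on demand exactly like GPUs or NVMe drives.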

By making DRAM a composable resource over CXL fabrics, Liqid, Samsung, and Tanzanite showcase the efficiency and flexibility necessary to meet the changing infrastructure demands being driven by rapid advancements in artificial intelligence and machine learning (AI+ML), edge computing, and hybrid cloud environments.

Ben Bolles, executive director of product management at Liqid, says: “With the breakthrough performance provided by CXL, the industry will be better positioned to support and make sense of the massive wave of AI innovation predicted over just the next few years, and we’re excited to collaborate with Samsung and Tanzanite to illustrate the power of this new protocol.

“As our demonstration illustrates, by decoupling DRAM from the CPU, CXL enables us to achieve these milestone results in performance, infrastructure flexibility, and more sustainable resource efficiency, preparing organizations to rise to the architectural challenges that industries face as AI evolves at the speed of data.”

According to Gartner, AI use in the enterprise has tripled in the past two years, and by 2025, AI will be the top driver of infrastructure decision making as the AI market matures, resulting in a 10x growth in compute requirements over that same period.

Adopting CXL technology can go a long way toward meeting this exploding demand, delivering compute performance, efficiency, and sustainability not possible with traditional architectures.

It allows previously static DRAM resources to be shared, delivering significantly higher performance, reduced software-stack complexity, and lower overall system cost, so users can focus on accelerating time to results for target workloads rather than maintaining physical hardware.
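The efficiency argument behind sharing DRAM can be illustrated with a toy utilization calculation. The workload sizes below are hypothetical, chosen only to show why pooling reduces "stranded" memory:

```python
# Toy comparison: static per-server DRAM vs a shared CXL memory pool.
# Workload numbers are illustrative only.

demands = [100, 400, 250, 150]   # GB each workload actually needs

# Static provisioning: every server carries DRAM for the worst case.
static_per_server = max(demands)                  # 400 GB per server
static_total = static_per_server * len(demands)   # 1600 GB installed

# Pooled provisioning: one shared pool sized to aggregate demand.
pooled_total = sum(demands)                       # 900 GB installed

stranded = static_total - pooled_total            # 700 GB sitting idle
savings = stranded / static_total                 # ~44% less DRAM installed
```

In the static case, memory that one server cannot use is stranded; with a shared pool, capacity follows demand, which is the resource-efficiency claim the vendors are making.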

With a wave of CXL-supported servers becoming commercially available, incorporating composability into any server refresh lets existing resources remain in use while also deploying DRAM as a shared, bare-metal resource that works in tandem with the accelerator technologies already available to Liqid Matrix CDI software. Recognizing the urgent need for compute performance and efficiency, Liqid Matrix software leads the industry in supporting CXL as a fabric type.

Samsung is the world leader in advanced memory technologies and has been collaborating with data center, server, and chipset manufacturers to develop CXL interface technology since the CXL Consortium was formed in 2019. Samsung’s newly unveiled DDR5-based CXL module is the industry’s first memory expansion module supporting the interface. CXL memory expansion technology scales memory capacity and bandwidth well beyond what is commercially available, enabling organizations to meet the demands of much larger, more complex workloads associated with AI and other evolving data center applications.

Cheolmin Park, VP of memory global sales and marketing at Samsung Electronics, says: “With new DDR5-based CXL memory modules, Samsung is helping to lay the foundation for a high-bandwidth, low-latency memory ecosystem designed to support and advance the modern computing era in which AI+ML is integrated into more day-to-day data center operations. We are growing the CXL ecosystem with Liqid and others to unlock the unprecedented infrastructure performance required to achieve breakthroughs in high-performance computing for our customers and the industry as a whole.”

Tanzanite’s architecture and the purpose-built design of its Smart Logic Interface Connector (SLIC) enable independent scaling and sharing of memory and compute in a low-latency pool within and across server racks. The Tanzanite solution provides a highly scalable architecture for exascale memory capacity and compute acceleration.

For Dell Technologies World, the Liqid team collaborated with Samsung and Tanzanite Silicon Solutions engineers to demonstrate the potential for composable DRAM in real-world scenarios. The lab configuration consists of two Next-Gen Intel Xeon Scalable processor-based Archer City systems (codenamed Sapphire Rapids), along with Tanzanite’s SLIC implemented in an Intel Agilex FPGA, demonstrating clustered/tiered memory allocated across two hosts and orchestrated using Liqid Matrix CDI software.

Shalesh Thusoo, CEO, CTO, and founder of Tanzanite Silicon Solutions, says: “We’re excited to work with Liqid and Samsung to demonstrate our shared vision of the potential for the CXL protocol with Tanzanite’s industry-leading memory pooling solution, applicable to a broad range of emerging applications such as AI+ML, blockchain technology, and the metaverse. As the demonstration at Dell Technologies World shows, composability for DRAM via CXL has radical implications for how we architect our hardware ecosystems for better efficiency, performance, and flexibility in a world operating at data center scale.”
