Cerebras partners with Dell to challenge Nvidia's AI dominance

Ollie Chang, Taipei; Jay Liang, DIGITIMES Asia

Credit: Cerebras Systems

US chip design startup Cerebras Systems Inc. recently announced a partnership with Dell.

Cerebras plans to enter the enterprise market with its latest compute system, the CS-3, powered by the Wafer-Scale Engine 3 (WSE-3). The move aims to challenge Nvidia's dominance in the artificial intelligence (AI) market by offering more diverse AI compute options.

According to reports from Fierce Electronics and announcements from Cerebras, the partnership with Dell will provide AI systems and supercomputers, advanced Large Language Model (LLM) training, and expert machine learning services.

Dell's PowerEdge R6615 rack servers, powered by AMD's EPYC 9354P processors, will be used in Cerebras AI supercomputers, enabling enterprises to train machine learning (ML) models at scales well beyond today's state of the art.

Andrew Feldman, co-founder and CEO of Cerebras, said the partnership will significantly expand the company's global sales channels. Feldman claims that Cerebras' AI accelerator technology can offer 880 times the memory capacity of GPUs and reduce the code required to build LLMs by up to 97%.

In May 2024, Cerebras announced breakthroughs in molecular dynamics simulations, achieved in collaboration with researchers from three US national laboratories—Sandia, Lawrence Livermore, and Los Alamos—using the Wafer-Scale Engine 2 (WSE-2). The simulation ran 179 times faster than the Frontier supercomputer, which comprises 39,000 GPUs.

Cerebras' processor chip integrates a large number of compute cores on a single wafer, forming a massive AI accelerator chip.

Jack Gold, president and principal analyst of consulting firm J. Gold Associates, noted that Cerebras' wafer-scale processing technology offers advantages in compute speed and memory utilization, providing substantial parallel computing capability on a single chip. This reduces the need for large-scale high-speed interconnects between discrete accelerator modules such as Nvidia's H100.

Nvidia's GPU interconnect technology, NVLink, has been crucial to its prominence in the AI accelerator market. It has prompted competitors such as AMD and Intel to establish the Ultra Accelerator Link (UALink) alliance, which aims to create an open standard as an alternative to NVLink.

The competition between Nvidia and Cerebras reflects two paths for the development of AI accelerators: the former packages ever more GPUs for parallel computing, while the latter pursues a centralized, massive single-chip approach.

Feldman believes most companies prefer not to rely on a single supplier and want more diverse solutions in the market, which presents Cerebras with significant opportunities for growth.