
Broadcom unveils Jericho4 at OCP Taipei, pushes Ethernet as backbone of next-gen AI infrastructure

Joseph Chen, DIGITIMES Asia, Taipei

Broadcom has unveiled Jericho4, its latest AI fabric router chip designed to connect geographically dispersed data centers, reinforcing the company's belief that Ethernet—not proprietary interconnects—will power the next wave of large-scale AI and machine learning systems.

The announcement was made during the 2025 OCP APAC Summit in Taipei, where Ram Velaga, Broadcom's Senior Vice President and General Manager, outlined a three-tiered approach to scaling AI infrastructure—from intra-rack communication to global data center clusters.

"AI is still at the one-percent mark of what's possible," Velaga said on stage, arguing that Ethernet's ubiquity, cost-efficiency, and openness make it the best choice for scaling AI workloads across every layer of the stack. "The best way to build a large distributed computing system is to do it on Ethernet."

Credit: Joseph Chen

Jericho4: Connecting data centers over 100 km apart

At the heart of the Taipei announcement was Jericho4, a 3nm multi-die AI fabric router chip designed specifically for inter-data center connectivity. It supports line-rate encryption, very deep packet buffering, and high-bandwidth memory (HBM) to move AI workloads between facilities separated by more than 100 kilometers. As AI model sizes grow, Broadcom sees a need to link multiple 50-60 MW data centers so they can function as unified compute clusters.

Jericho4 fills that gap—marking a departure from the company's Tomahawk product line, which focuses on intra-data center switching.
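The case for deep, HBM-backed buffers comes down to bandwidth-delay arithmetic: over 100 km of fiber, in-flight data that may need retransmission must be held in buffers sized to roughly the link bandwidth times the round-trip time. A rough sketch (the 51.2 Tbps link rate here is illustrative, not a published Jericho4 specification):

```python
# Back-of-envelope: why an inter-data-center router needs deep (HBM-backed) buffers.
# Assumptions (illustrative, not Broadcom specs): light travels ~5 microseconds
# per km in fiber; aggregate long-haul bandwidth of 51.2 Tbps.

def buffer_bytes(bandwidth_bps: float, distance_km: float) -> float:
    """Bandwidth-delay product for a round trip over the fiber span, in bytes."""
    one_way_s = distance_km * 5e-6      # ~5 us per km of fiber
    rtt_s = 2 * one_way_s               # round-trip time
    return bandwidth_bps * rtt_s / 8    # bits -> bytes

# 100 km at 51.2 Tbps works out to several gigabytes of in-flight data,
# far beyond typical on-die SRAM, which is where HBM comes in.
gib = buffer_bytes(51.2e12, 100) / 2**30
print(f"~{gib:.1f} GiB of buffering needed for 100 km at 51.2 Tbps")
```

Even halving the assumed link rate leaves a buffer requirement in the gigabyte range, which is consistent with the article's pairing of deep buffering with on-package HBM for distances beyond 100 km.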

Scaling AI with Ethernet: Rack to region

Velaga's presentation also reiterated Broadcom's broader Ethernet strategy for AI:

Within racks (Scale-Up Ethernet): Broadcom's Scale-Up Ethernet (SUE) specification enables low-latency communication between XPUs (GPUs, TPUs, etc.) and HBM within a rack or a few racks. The company's Tomahawk Ultra switch achieves under 400 nanoseconds of XPU-to-XPU latency, with about 250 nanoseconds spent inside the switch, enabling scale-up domains of hundreds or even thousands of XPUs.

Across the data center (Scale-Out Ethernet): For larger clusters within a single facility, Broadcom introduced the Tomahawk 6, delivering 100 terabits per second of bandwidth. Velaga said it can reduce optical components by 67%, cut power usage, and lower network complexity for installations such as 128,000-GPU clusters.

The company's pitch centers on Ethernet as a scalable, vendor-neutral alternative to proprietary AI interconnects. Broadcom continues to invest in open standards, contributing the SUE specification to the Open Compute Project (OCP) community to foster multi-vendor innovation across hardware and software layers.

Credit: Joseph Chen

Article edited by Jack Wu