Analysis: NVLink licensing reshapes semiconductor alliances; a potential Broadcom truce

Amanda Liang

In September 2025, Nvidia CEO Jensen Huang made a rare joint livestream appearance with Intel CEO Lip-Bu Tan to announce a US$5 billion equity investment in Intel. In March 2026, Nvidia followed up with a US$2 billion investment in Marvell Technology. Why Huang is investing so aggressively in potential competitors remains an open question.

The agreements explicitly require Marvell to support Nvidia's licensed NVLink Fusion interface. More specifically, Marvell will provide customized XPUs along with scale-up interconnect networks compatible with NVLink Fusion. However, this does not necessarily mean Marvell will support Nvidia's NVSwitch. The wording of the announcement suggests that Marvell may instead support the NVLink protocol via UALink or PCI-Express 6.0 switches.

NVLink Fusion is also a key element in Nvidia's collaboration with Intel. Huang has positioned AI factories as Nvidia's strategic focus for the next five years. In this industrial transformation, each AI factory will have different requirements, and the optimal AI infrastructure configuration will vary accordingly. Since 2025, Nvidia has steadily expanded access to NVLink Fusion to serve this emerging AI factory ecosystem.

NVLink is Nvidia's proprietary high-speed interconnect technology that enables GPUs in data centers to function as a unified system through enhanced communication bandwidth. Compared to PCIe 5.0 (128 GB/s), NVLink delivers up to 14 times greater bandwidth but historically has only supported Nvidia's own ecosystem.
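The roughly 14x figure can be sanity-checked with a quick back-of-the-envelope calculation. The sketch below assumes fifth-generation NVLink at 1.8 TB/s of aggregate per-GPU bandwidth (Nvidia's published Blackwell figure) against the article's PCIe 5.0 x16 number of 128 GB/s; these are spec-sheet values, not measured throughput.

```python
# Rough bandwidth comparison: fifth-generation NVLink vs. PCIe 5.0 x16.
# Both figures are aggregate bidirectional numbers from public spec sheets;
# treat them as ballpark values, not benchmark results.

PCIE5_X16_GBPS = 128   # PCIe 5.0 x16 link, GB/s
NVLINK5_GBPS = 1800    # NVLink 5 per-GPU aggregate (Blackwell), GB/s

ratio = NVLINK5_GBPS / PCIE5_X16_GBPS
print(f"NVLink 5 vs PCIe 5.0 x16: {ratio:.1f}x")  # → 14.1x
```

The exact multiplier depends on which NVLink generation and link count are compared, which is why figures quoted in coverage range from "up to 14 times" downward for earlier generations.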

Now, Nvidia is partially opening this technology, allowing customers to semi-customize their AI factories. This includes pairing preferred CPUs with Nvidia GPUs or integrating Nvidia GPUs with other custom AI chips.

While alternatives like Google's TPU may offer lower total cost of ownership, Nvidia's competitive advantage lies in its CUDA ecosystem and its expanding software stack. NVLink Fusion effectively opens Nvidia's AI ecosystem to new entrants, particularly those aiming to compete in the ASIC market. However, this strategy also raises concerns about potentially diluting demand for Nvidia's own products, which helps explain skepticism around Huang's outreach to competitors like Intel and Marvell.

Huang's priority is not simply selling more chips; it is bringing the broader AI industry into the CUDA ecosystem. In theory, NVLink Fusion could enable seamless integration of platforms such as Google TPU, AWS Trainium, Microsoft Maia, and Meta's MTIA into AI factory architectures.

In practice, companies like MediaTek and Marvell have already committed to building custom AI chips using NVLink Fusion, while Fujitsu and Qualcomm plan to develop custom CPUs around it. Among major semiconductor players, Intel has joined the ecosystem, while AMD may never participate. This raises the question of whether Broadcom will be next.

At Computex 2025, Huang unveiled a series of AI concepts and products, but NVLink Fusion may ultimately prove to be the most impactful development when viewed over the next five to ten years.

Broadcom's position is unique. It designs TPUs for Google, produces TPU racks for Anthropic, and manufactures Meta's MTIA XPU. Reports suggest that ByteDance and Apple are also key XPU customers, while OpenAI has commissioned Broadcom to develop its Titan XPU.

If these hyperscale cloud service providers (CSPs) seek to adopt NVLink Fusion, the question becomes whether Nvidia and Broadcom could reconcile and reach an agreement. Such a deal would likely involve deep technical collaboration, similar to Nvidia's arrangement with Marvell.

While Broadcom remains a competitor to Nvidia in scale-up networking, it is also a critical supplier across multiple domains. Given its dominance in Ethernet switch ASICs and its rapidly growing custom XPU business, Broadcom may be the only player capable of effectively counterbalancing Nvidia's expanding influence.

Article translated by Emily Kuo and edited by Jack Wu