NEWS TAGGED RUBIN
Wednesday 15 April 2026
Taiwan equipment makers ride advanced packaging and SiPh wave
Nvidia is accelerating the commercial rollout of silicon photonics (SiPh) technology, marked by the launch of its Rubin Ultra platform and the gradual establishment of co-packaged...
Wednesday 15 April 2026
Nscale redirects OpenAI's Stargate Norway capacity to Microsoft
Hyperscaler Nscale has agreed to rent data center capacity to Microsoft at its campus in Narvik, Norway. Located in the Arctic Circle, the site was previously intended for OpenAI as...
Wednesday 15 April 2026
SK Hynix may cut Nvidia HBM4 shipments as Rubin ramp reportedly faces delays
SK Hynix is reportedly considering reducing its planned 2026 shipments of high-bandwidth memory (HBM4) to Nvidia by about 20-30%, amid...
Tuesday 7 April 2026
Global AI chip suppliers compete as TSMC remains top foundry partner
As the artificial intelligence (AI) era advances, some 133 companies are actively developing or selling AI chips, according to a SEMIEcosystem report citing Jon Peddie Research...
Tuesday 31 March 2026
Vera Rubin compute tray design unfinalized as Nvidia pushes supply diversification
Sources in the passive-component supply chain report that Nvidia's next-generation platform architecture, Vera Rubin, is scheduled to enter mass production in the third quarter of...
Thursday 26 March 2026
Nvidia and Emerald AI partner with utilities to build grid-responsive AI data centers
Nvidia and Emerald AI said on Tuesday that they are joining forces with a group of major US power producers — including AES Corporation, Constellation Energy, Invenergy, NextEra...
Friday 20 March 2026
Nvidia strategy at GTC 2026 blocks ASIC rivals with Groq deal
At GTC 2026, Nvidia unveiled a strategic US$20 billion partnership with AI chip startup Groq, licensing its LPU technology and hiring key team members to integrate advanced inference...
Friday 20 March 2026
Nvidia to supply 1 million GPUs to Amazon through 2027 in landmark AI cloud deal
Nvidia will supply 1 million graphics processing units (GPUs) to Amazon.com's cloud computing division by 2027 in one of the largest artificial intelligence infrastructure agreements...
Friday 20 March 2026
Nvidia Vera Rubin servers to drive liquid cooling demand
Nvidia has revealed more details about its next-generation Vera Rubin (VR) servers at Nvidia GTC 2026, confirming a full transition to liquid cooling architecture. Thermal module makers...
Thursday 19 March 2026
Analysis: GTC 2026 widens US-China AI compute gap
The annual Nvidia GTC conference has become a global barometer for the artificial intelligence (AI) industry. In a nearly two-hour keynote, Nvidia CEO Jensen Huang laid out a clear...
Thursday 19 March 2026
Nvidia positions Groq 3 LPUs alongside Vera Rubin for an inference-first era
The 2026 Nvidia GTC keynote signaled a clear industry shift toward inference. CEO Jensen Huang declared that "training is just the beginning — inference is the core battleground...
Thursday 19 March 2026
Intel Xeon 6 wins CPU slot in Nvidia DGX Rubin, stakes claim in AI inference stack
Intel's Xeon 6 processors have been selected as the host CPU for Nvidia's DGX Rubin NVL8 system — a move announced at GTC 2026 that gives concrete form to the two companies'...
Thursday 19 March 2026
Nvidia standardizes Vera Rubin liquid cooling, names four cold plate suppliers
Nvidia plans to launch its next-generation AI server architecture, Vera Rubin, in the second half of 2026, with liquid cooling set to become standard. The company will centralize procurement...
Thursday 19 March 2026
Taiwan's AI supply chain scales at GTC 2026: Foxconn, Wiwynn, Advantech, BizLink deepen Nvidia ties
Taiwanese hardware vendors expanded their presence at Nvidia GTC 2026, underscoring a coordinated push into AI servers, edge computing, robotics, and data center infrastructure. Foxconn...
Thursday 19 March 2026
Groq anchors Nvidia's inference strategy; CPU redefines architecture for AI agents
As AI evolves from generating information to executing tasks, inference scenarios characterized by coding agents and requiring low latency and high throughput are ushering in the next...