Nvidia has halted production of artificial intelligence (AI) chips intended for the Chinese market and redirected manufacturing capacity at TSMC to its next-generation Vera Rubin platform, as regulatory barriers in both the US and China continue to cloud prospects for sales to Chinese customers, the Financial Times reported.
The shift away from H200 chips toward the Vera Rubin architecture suggests Nvidia no longer expects significant demand for the processors in China in the near term after months of uncertainty over export approvals from Washington and potential restrictions in Beijing.
Regulatory uncertainty drives strategic pivot
The H200, one of Nvidia's earlier-generation AI processors, had been positioned to comply with US export controls on advanced semiconductors. Vera Rubin, unveiled earlier this year, represents the company's latest chip architecture designed to support more complex AI systems and is expected to see strong demand from major US technology companies, including OpenAI and Google.
Washington has tightened restrictions on exports of advanced semiconductors to China, while Beijing has signaled it may curb imports to support domestic chipmakers.
"Instead of waiting in limbo, Nvidia has to move on to what it can achieve with certainty, especially when there's a shortage of supply for its advanced stuff," one person familiar with the plans said. "This could, in a way, accelerate the Vera Rubin delivery and roll out."
Nvidia had previously lobbied both Washington and Beijing to allow sales of H200 chips in China. After US President Donald Trump indicated in December that such sales could be permitted, the company began ramping up production in anticipation of orders from Chinese customers.
The company had expected demand of more than one million units from China, with suppliers preparing for deliveries as early as March. In early January, Nvidia CEO Jensen Huang said demand for the chips was "very high" and that the company had ramped up its supply chain.
H200 exports stalled despite limited approvals
The approval process later stalled as US officials sought tighter safeguards to prevent Chinese use of advanced chips in ways that could threaten national security. At the same time, Beijing has considered restricting purchases of H200 chips to encourage local AI developers to adopt processors from domestic semiconductor companies.
During an earnings call last week, Nvidia CFO Colette Kress said that while the US government had granted licenses allowing "small amounts" of H200 chips to be shipped to China, the company had not yet generated revenue from those approvals.
"We do not know whether any imports will be allowed into China," Kress said.
A US Commerce Department official told Reuters last month that none of Nvidia's H200 chips had been sold to Chinese customers despite the export licenses granted. Although the Trump administration formally allowed sales of H200 chips to China in January, shipments remained stalled due to guardrails built into the approval process.
Nvidia has already produced about 250,000 H200 chips. If both Washington and Beijing ultimately allow only limited orders, the existing inventory could be sufficient to meet demand.
China's President Xi Jinping and US President Donald Trump are scheduled to meet later this month, raising speculation that export controls on advanced chips could be revisited. If restrictions were eased, Nvidia could take up to three months to reallocate supply chain capacity to resume H200 production.
Vera Rubin reshapes AI supply chain
The shift to the Vera Rubin platform is also reshaping the broader AI semiconductor supply chain.
According to Korean media reports, Samsung Electronics and SK hynix are emerging as leading suppliers of high-bandwidth memory (HBM) for Nvidia's next-generation AI accelerators based on the Vera Rubin architecture.
The companies are expected to supply HBM4, the sixth generation of high-bandwidth memory, which Nvidia has identified as a key component supporting the performance of its next-generation AI accelerators.
Industry sources cited by Korean media said Micron Technology was not currently listed among suppliers for the flagship Vera Rubin accelerator, although the company could supply memory for mid-range products within the broader Rubin-series lineup.
Memory suppliers compete for next-generation AI accelerators
Nvidia's dominance in the AI accelerator market has intensified competition among memory manufacturers seeking to join its HBM supply chain.
The Vera Rubin platform is expected to feature 16 stacks of HBM4 with a total memory capacity reaching 576GB, higher than the 432GB capacity planned for AMD's next-generation MI450 accelerator.
Samsung Electronics has reportedly passed key stages of Nvidia's HBM4 qualification testing, while SK hynix is continuing optimization work with Nvidia as part of final testing procedures.
Given that HBM4 production typically requires more than six months from DRAM wafer input through packaging, industry observers expect the two companies to begin mass production as early as this month.
Nvidia declined to comment on the report, while TSMC also declined to comment when contacted by Reuters.
Article edited by Jack Wu