Nvidia has seen a ramp-up in orders for its A100 and H100 AI GPUs, leading to an increase in wafer starts at TSMC, according to market sources.
The AI boom brought on by ChatGPT has caused demand for high-performance computing GPUs to soar. In addition, the US ban on AI chip sales to China has prompted major Chinese companies such as Baidu to buy up Nvidia's AI GPUs.
Nvidia recently announced a partnership with ServiceNow to develop powerful enterprise-level generative AI functions that will transform business processes by automating workflows to make them faster and smarter.
ServiceNow uses Nvidia's software, services, and accelerated infrastructure to develop customized large language models (LLMs) for its end-to-end digital transformation platform. In turn, ServiceNow's generative AI tools help Nvidia simplify its IT operations: the tools use Nvidia's data to customize foundation models running on Nvidia's hybrid cloud infrastructure, which combines Nvidia DGX Cloud with locally deployed Nvidia DGX SuperPOD AI supercomputers.
Nvidia's DGX H100 series began shipping in May and continues to receive large orders. The DGX H100 forms part of the Tokyo-1 supercomputer in Japan, which will use simulations and AI to speed up the drug discovery process. Each Nvidia H100 Tensor Core GPU in the DGX H100 series delivers on average six times the performance of the previous generation. The DGX H100 is equipped with eight GPUs, each with a transformer engine to accelerate generative AI models. Compared with the DGX A100, the DGX H100 is twice as energy efficient, measured in kilowatts per petaflop.
Several of Nvidia's partners worldwide already supply the DGX H100 system, DGX POD, and DGX SuperPOD.
The H100 GPU adopts the Hopper architecture and uses TSMC's 4nm process, while the A100 uses TSMC's 7nm process. Nvidia has modified both models for export to the China market as the H800 and A800, respectively. Nvidia has placed follow-up orders with TSMC, as its on-hand orders are reportedly full. These orders will help TSMC fill a capacity gap created by order cuts from MediaTek.