Nvidia plans to shift the AI compute battleground from training to inference by integrating language processing unit technology and offering multiple inference chips, with OpenAI agreeing to be a major customer, according to a Wall Street Journal report.