SoftBank to build supercomputer for generative AI

Chiang, Jen-Chieh, Taipei; Jingyue Hsiao, DIGITIMES Asia


NHK and Nikkei reported that SoftBank, the telecom subsidiary of SoftBank Group, plans to invest JPY20 billion (US$140 million) and use Nvidia's GPUs to build a supercomputer for specialized applications, which will offer the highest computing power for generative AI in Japan.

After establishing the supercomputer, SoftBank will increase the parameter count of its LLM (large language model) from 1 billion to 60 billion. Although this is smaller than GPT-3's 175 billion parameters, SoftBank's LLM will not require as many parameters because it is intended for specialized applications, such as finance and medicine, rather than general-purpose use like GPT-3.

SoftBank expects its generative AI to be commercialized within a couple of years, initially for enterprise clients before entering the consumer market. Besides SoftBank, another SoftBank Group subsidiary, Line, is collaborating with South Korea-based Naver to develop an LLM that supports Japanese and Korean.

In March 2023, SoftBank established a new subsidiary, planning to build the supercomputer in fiscal 2023 (April 2023 to March 2024), then use it both for developing generative AI and for renting out computing power to other companies for a fee.

In May, SoftBank and Nvidia announced a collaboration on data centers using Nvidia's GH200 Grace Hopper Superchip for generative AI and 5G/6G. In June 2023, Masayoshi Son, chairman and CEO of SoftBank Group, announced at the shareholders' meeting a strategy to ramp up investments in the development of AI and related applications.