Research Insight: handset industry to introduce Gemini Nano faster, pure-play foundries and memory IDMs to benefit

Luke Lin, Taipei; Jingyue Hsiao, DIGITIMES Asia

On December 6, Google unveiled its large language model (LLM) family Gemini, intending to integrate it into various Google products and services. DIGITIMES Research believes that integrating Gemini Nano into Android will, within 2-3 years, expand the range of products offering on-device generative AI from flagship models to mid-to-high-end devices, and help address the fragmentation of on-device AI within the Android ecosystem.

The recently released Gemini 1.0 comes in three versions differentiated by parameter size: Gemini Ultra, Gemini Pro, and Gemini Nano. Gemini Nano is a distilled model with far fewer parameters, catering to on-device LLM use: Nano-1 has only 1.8 billion parameters, while Nano-2 has 3.25 billion.
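To put these parameter counts in perspective, the back-of-envelope sketch below estimates the weight memory each Nano variant would occupy on a device under a few common quantization widths. The bit-widths are illustrative assumptions, not confirmed deployment settings.

```kotlin
// Back-of-envelope weight-memory estimate for the two Gemini Nano variants.
// Parameter counts come from the figures cited above; the quantization
// bit-widths are illustrative assumptions, not confirmed deployment settings.
fun weightMemoryGiB(params: Double, bitsPerWeight: Int): Double =
    params * bitsPerWeight / 8.0 / (1 shl 30)

fun main() {
    val variants = mapOf("Nano-1" to 1.8e9, "Nano-2" to 3.25e9)
    for ((name, params) in variants) {
        for (bits in listOf(16, 8, 4)) {
            println("%s at %2d-bit weights needs ~%.2f GiB"
                .format(name, bits, weightMemoryGiB(params, bits)))
        }
    }
}
```

Under these assumptions, Nano-2 shrinks from roughly 6 GiB of weights at 16-bit precision to about 1.5 GiB at 4-bit, which is why aggressive quantization is what makes models of this size plausible alongside the RAM budgets of current smartphones.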

DIGITIMES Research believes that mobile devices, such as smartphones and tablets, will adopt on-device LLMs and generative AI faster than PCs, as PC hardware makers are often constrained by rules set by Microsoft, whereas handset makers enjoy greater flexibility. Working with chipmakers, handset industry players began introducing on-device LLM and generative AI applications in flagship models in the latter half of 2023.

At first glance, Google's integration of Gemini Nano into Android merely echoes the steps already taken by the handset industry. However, introducing Gemini Nano into the Android ecosystem and enabling on-device LLM and generative AI applications through SDK tools and APIs opens expanded opportunities for third-party developers to innovate, as sketched below. As diverse on-device AI applications emerge, Android device manufacturers and chip providers will have clear guidelines to follow when collaborating on hardware development.
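As a purely hypothetical illustration of what such third-party access could look like, the Kotlin sketch below models an app calling a shared, system-provided on-device model. Every name in it (OnDeviceLlm, GenerationConfig, generate, summarize) is a placeholder invented for this article, not Google's actual SDK or API.

```kotlin
// Hypothetical sketch only: these types are placeholders invented for
// illustration and do not reflect Google's actual on-device SDK or APIs.
data class GenerationConfig(
    val maxOutputTokens: Int = 256,
    val temperature: Double = 0.7,
)

interface OnDeviceLlm {
    // Runs inference against the shared system model; no network required.
    suspend fun generate(prompt: String, config: GenerationConfig): String
}

// A third-party feature built on the shared model: the app ships no weights
// of its own, so any Gemini Nano-capable device exposes the same capability.
suspend fun summarize(llm: OnDeviceLlm, article: String): String =
    llm.generate(
        prompt = "Summarize in two sentences:\n$article",
        config = GenerationConfig(maxOutputTokens = 128),
    )
```

The design point is that the model ships with the platform rather than with each app, giving developers a uniform target across all Gemini Nano-capable devices instead of per-vendor AI stacks.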

DIGITIMES Research estimates that mobile products that support on-device LLM and generative AI will expand from flagship to mid-to-high-end models during 2024-2026, with shipments expected to surge.

Mobile devices supporting on-device LLMs and generative AI demand greater AI computing capability. That pushes up the transistor count in the main processor's NPU tile and the tile's share of the die area, substantially enlarging the die and increasing wafer consumption. Pure-play foundries, notably TSMC, are anticipated to benefit.
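The wafer-consumption logic can be made concrete with the standard dies-per-wafer approximation. The die areas below are assumed round numbers chosen for illustration, not measurements of any actual mobile SoC.

```kotlin
import kotlin.math.PI
import kotlin.math.sqrt

// Standard dies-per-wafer approximation: gross dies minus an edge-loss term.
// DPW ≈ π·(d/2)²/A − π·d/sqrt(2·A), with wafer diameter d and die area A.
fun diesPerWafer(dieAreaMm2: Double, waferDiameterMm: Double = 300.0): Int {
    val grossDies = PI * waferDiameterMm * waferDiameterMm / (4 * dieAreaMm2)
    val edgeLoss = PI * waferDiameterMm / sqrt(2 * dieAreaMm2)
    return (grossDies - edgeLoss).toInt()
}

fun main() {
    // Assumed round numbers for illustration: a 100 mm² SoC versus the same
    // design with 20% more area spent on a larger NPU tile.
    val baseline = diesPerWafer(dieAreaMm2 = 100.0)
    val enlarged = diesPerWafer(dieAreaMm2 = 120.0)
    println("100 mm² die: $baseline dies/wafer; 120 mm² die: $enlarged dies/wafer")
    println("Same unit volume needs ~%.0f%% more wafers"
        .format(100.0 * baseline / enlarged - 100))
}
```

Under these assumed numbers, a 20% larger die yields roughly 20% fewer dies per 300mm wafer, so shipping the same volume of chips requires correspondingly more wafer starts, which is the mechanism behind the foundry benefit.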

The trend also drives up DRAM and NAND flash consumption, as devices will need higher-capacity memory, benefiting major memory IDMs such as Samsung Electronics, SK Hynix, and Micron. Providers of cooling solutions and batteries are likely to gain as well.

According to DIGITIMES Research, Gemini Nano is poised to resolve the fragmentation of on-device AI in smartphones: until now, smartphone manufacturers have hastily introduced AI applications in select products, hindering the ecosystem's growth. It is equally important to keep an eye on what Apple will introduce in 2024 as part of this trend.