Nvidia's latest CUDA restrictions stir Chinese AI community

Staff reporter, Taipei, DIGITIMES Asia


Nvidia has taken further action in the Chinese market to defend its GPU dominance, attempting to block third-party GPU companies from seamlessly using its CUDA software. The move has sparked significant controversy and discussion within China's artificial intelligence (AI) and chip communities.

Recently, Nvidia explicitly stated in the user license agreement for CUDA 11.6 that running CUDA through translation layers on other hardware platforms is prohibited. This declaration has shocked the AI community, potentially putting projects like ZLUDA, supported by competitors such as AMD and Intel, at legal risk and leaving many AI companies in China bewildered.
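A translation layer of the kind the new clause targets works by re-implementing CUDA-style entry points on top of another vendor's runtime, so an application believes it is calling Nvidia's API while the work is actually done elsewhere. The following is a minimal illustrative sketch of that idea only; the class and function names are hypothetical, and real projects such as ZLUDA operate at the binary/ABI level rather than in Python:

```python
# Illustrative sketch of a CUDA "translation layer": calls an application
# would make against Nvidia's driver API are intercepted and forwarded to
# a different vendor's backend. All names here are hypothetical.

class FakeVendorBackend:
    """Stand-in for a non-Nvidia GPU runtime (e.g. a ROCm-like API)."""
    def __init__(self):
        self.allocations = {}
        self.next_handle = 1

    def alloc(self, nbytes):
        handle = self.next_handle
        self.next_handle += 1
        self.allocations[handle] = bytearray(nbytes)
        return handle

    def free(self, handle):
        del self.allocations[handle]


class CudaTranslationLayer:
    """Exposes CUDA-style entry points, delegating to the other backend."""
    def __init__(self, backend):
        self.backend = backend

    # Same name and shape as the CUDA driver call the application expects.
    def cuMemAlloc(self, nbytes):
        return self.backend.alloc(nbytes)

    def cuMemFree(self, handle):
        self.backend.free(handle)


backend = FakeVendorBackend()
cuda = CudaTranslationLayer(backend)
ptr = cuda.cuMemAlloc(1024)           # the application "thinks" it called CUDA
print(len(backend.allocations[ptr]))  # prints 1024: memory lives in the backend
```

Because no source code changes are required, this runtime approach is what makes unmodified CUDA binaries portable, and it is precisely the pattern Nvidia's EULA wording now prohibits.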

This action has stirred up a storm. The GPU industry has long suffered from what's known as "CUDA dependency" – where GPUs act as the "face" and CUDA serves as the "core". The synergy between the two has allowed Nvidia to solidify its CUDA ecosystem over the years, creating a moat that other GPU manufacturers find difficult to breach. However, some non-Nvidia GPU platform providers have been compatible with CUDA for years to meet market demands, leveraging Nvidia's ecosystem to support GPU developers.

Developers have become highly dependent on CUDA running on Nvidia GPUs for its stability; staying put spares them from switching platforms and having to learn another platform's quirks and bugs.

Observers within the industry note that Nvidia's prohibition of CUDA compatibility suggests the company is aware of potential threats from other competing manufacturers. After all, CUDA compatibility could narrow the gap with Nvidia's ecosystem. Therefore, it is no surprise that Nvidia has taken further steps to consolidate its dominant position in accelerated computing.

In China, some local GPU manufacturers have maintained CUDA compatibility to ensure smooth user migration, even as companies such as Moore Threads and Biren Technology build out their own software ecosystems.

In response to market rumors, Moore Threads was the first to issue a statement, asserting that its MUSA architecture is unrelated to CUDA. According to the company, MUSA and MUSIFY are not subject to any clause of Nvidia's End User License Agreement (EULA), so developers can use them with confidence.

Moore Threads emphasizes that MUSA is a fully self-developed GPU unified system architecture for advanced computing, with complete intellectual property rights and no dependency on CUDA. MUSA is the unified system architecture adopted across the Moore Threads product line, integrating a unified programming model, software runtime libraries, a driver framework, an instruction set architecture, and the chip architecture.

Furthermore, Moore Threads points out that MUSIFY, the development workflow tailored for MUSA, is completely independent of Nvidia's CUDA ecosystem and therefore not bound by Nvidia's terms.

Industry analysts suggest that although many Chinese companies pursue CUDA compatibility, most do so by translating CUDA programs into their own formats ahead of time and then running the result on their own cards. AMD's HIP/ROCm and Hygon's ecosystem work this way: CUDA programs must first be converted before they can be executed. Because no CUDA code passes through a translation layer at runtime, the new clause has limited impact on them and raises no substantial problems.
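The ahead-of-time approach described above is essentially source-to-source translation: CUDA API names in the source text are rewritten to another runtime's equivalents before compilation. AMD's real hipify tools (hipify-perl, hipify-clang) handle far more cases, but the core idea can be sketched with a tiny, illustrative mapping; the mapping table below is a small subset chosen for demonstration:

```python
# Minimal sketch of hipify-style source-to-source translation: CUDA API
# names are rewritten to their HIP equivalents before the code is compiled
# for the other vendor's GPU. Only a tiny subset of names is shown.
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
    "cuda_runtime.h": "hip/hip_runtime.h",
}

def hipify(source: str) -> str:
    # Replace longer names first so a short key never clobbers the prefix
    # of a longer one before that longer name has been handled.
    for cuda_name in sorted(CUDA_TO_HIP, key=len, reverse=True):
        source = source.replace(cuda_name, CUDA_TO_HIP[cuda_name])
    return source

cuda_src = (
    "#include <cuda_runtime.h>\n"
    "int main() { void *p; cudaMalloc(&p, 64); cudaFree(p); }"
)
print(hipify(cuda_src))
```

Because the output is ordinary source code for the target platform, the program that ultimately runs contains no CUDA at all, which is why analysts see this route as unaffected by the EULA's ban on runtime translation layers.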

Currently, China's Hygon maintains an independent ecosystem derived from ROCm, which lets it continue to tap the power of the local open-source community, while Huawei's self-built CANN ecosystem can deliver CUDA alternatives in the AI field at relatively low cost.