
On-premise or cloud computing: navigating AI investments amid hardware and software upgrades

Ines Lin, Taipei; Jerry Chen, DIGITIMES Asia


Rapid advancements in Large Language Models (LLMs) go hand in hand with ongoing upgrades in computing hardware.

For enterprise users, the question is when to invest in new products. Experts advise distinguishing between fixed and variable computational requirements, and timing purchases to avoid losses from sudden supplier price cuts.

More iterations, more choices

LLMs' heavy demands for computing and storage have prompted chipmakers, server providers, and public cloud operators to unveil fresh product lines. At the same time, various alliances among small and medium-sized enterprises (SMEs) have emerged, focusing on energy-efficient, economical chips, edge servers, and related products.

The deluge of new AI-related information makes it harder for enterprises to select software and hardware. Sega Cheng, co-founder and CEO of Taiwan-based iKala, cautions enterprises against the wholesale approach of adopting top-tier resources all at once.

Take GPUs as an example: Nvidia, AMD, and others are iterating rapidly, with new products hitting the market nearly every quarter. Rushing to procure them may expose enterprises to significant losses when older models' prices drop sharply in the short term.

"A phased procurement approach is preferable," says Cheng. He stresses planning according to fixed and floating computational power needs would be necessary, with some flexibilities of incremental purchases and others made on demand.

iKala's approach involves procuring a small number of GPUs for AI model training while still utilizing cloud computing resources.

While the cloud can reduce hardware costs, relying entirely on the cloud, like relying entirely on on-premises infrastructure, can become expensive. Cheng therefore advocates a hybrid cloud approach as the optimal choice.
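A minimal sketch of this reasoning, using entirely hypothetical GPU-hour rates and demand figures (none of these numbers come from iKala or any vendor), shows why owning hardware for the fixed baseline while renting cloud capacity for variable bursts can undercut both all-cloud and all-on-premises strategies:

    # Hypothetical cost comparison: all-cloud vs. all-on-premises vs. hybrid.
    # All rates and demand figures below are illustrative assumptions only.
    FIXED_GPU_HOURS = 10_000                         # assumed steady monthly baseline
    BURSTS = [0, 2_000, 8_000, 500, 6_000, 1_000]    # assumed variable monthly demand
    ON_PREM_RATE = 1.0                               # assumed amortized cost per owned GPU-hour
    CLOUD_RATE = 2.5                                 # assumed on-demand cloud cost per GPU-hour

    def all_cloud(bursts):
        # Rent everything: pay the cloud rate for baseline and bursts alike.
        return sum((FIXED_GPU_HOURS + b) * CLOUD_RATE for b in bursts)

    def all_on_prem(bursts):
        # Own everything: capacity must cover peak demand, so idle hours are still paid for.
        peak = FIXED_GPU_HOURS + max(bursts)
        return peak * ON_PREM_RATE * len(bursts)

    def hybrid(bursts):
        # Own hardware for the fixed baseline; rent cloud capacity only for bursts.
        return (FIXED_GPU_HOURS * ON_PREM_RATE * len(bursts)
                + sum(b * CLOUD_RATE for b in bursts))

    for name, cost in [("all-cloud", all_cloud(BURSTS)),
                       ("all-on-prem", all_on_prem(BURSTS)),
                       ("hybrid", hybrid(BURSTS))]:
        print(f"{name:12s} {cost:>12,.0f}")

With these assumed figures the hybrid split comes out cheapest, since the enterprise neither pays cloud premiums for its steady baseline nor provisions owned hardware that sits idle between bursts; where the actual crossover lies depends on real pricing and utilization.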

Software talent shortage

On the flip side, software procurement also remains a challenge, given SMEs' shortage of technical talent.

Representatives of SMEs report difficulties in getting help when they run into problems with new features after procuring AI solutions from certain cloud service providers (CSPs). These providers often lack service personnel in Taiwan, and their partner firms may not always keep pace with every new service.

Microsoft recently released "The State of AI Infrastructure" report, highlighting talent shortages as a primary concern for enterprises when adopting new AI technologies, with infrastructure upgrades posing another common challenge.

The report indicates that enterprises may have differing definitions of AI infrastructure needs, such as integrating new technologies into existing IT systems or procuring software, hardware, and related tools. When selecting infrastructure suppliers, considerations should include functionality, security, and cost.

Amid the AI wave, infrastructure-related hardware vendors are the early beneficiaries, while ample room remains for technology consulting services to thrive.

German market research firm IoT Analytics released a report on the generative AI (genAI) market at the end of 2023, categorizing providers into areas such as data center chips, models and platforms, and peripheral services. Its data indicates that major players dominate the data center and model-platform segments, while related services remain a largely untapped market.