Competition in the AI infrastructure industry is intensifying as demand for AI server capacity surges, fueling concerns about a potential AI bubble. Even so, enterprises continue to commit to long-term leasing contracts with major cloud and software companies, and persistent server shortages underscore the urgency of AI hardware deployment, pointing to substantial, sustained market demand.
Pegatron recently secured new AI-related orders and announced a strategic partnership with AI native cloud service provider (CSP) Together AI and AI digital infrastructure supplier 5C Group to jointly develop large-scale AI infrastructure. The Taiwan-based electronics manufacturer plans to begin shipping its GB300 product line, optimized for high-wattage GPUs, starting in the third quarter of 2026, with shipment volumes expected to grow steadily as GPU adoption expands.
AI data center technologies advance with new cooling trends
At Supercomputing 2025 (SC25), which spotlights high-performance computing developments, Pegatron highlighted the emerging dominance of Coolant Distribution Unit (CDU) designs, driven by Nvidia's leadership. Liquid cooling systems have extended beyond niche, high-end deployments to broader enterprise and AI applications. While many in the industry were cautious in 2024, 2025 is expected to confirm near-universal adoption of liquid cooling solutions, setting the stage for 2026 to become a breakthrough year for widespread deployment.
Mei-Hui Wang, vice president of Pegatron's 18th Business Division, noted that generative AI initially appealed primarily to consumers, but that enterprise willingness to invest in AI solutions is now beginning to rise. She cited applications such as factory maintenance and corporate operations, which often require new data centers and upgraded cooling infrastructure. Wang anticipates a significant increase in the number of companies building proprietary AI infrastructure in 2026.
Rise of 'Neo Cloud' providers reflects urgent AI compute demand
Pegatron pointed to the rapid growth of AI native CSPs such as Lambda, CoreWeave, and Together AI over the past two years. These emerging "Neo Cloud" firms specialize in leasing AI compute power and operate on exceptionally short deployment cycles, delivering tailored hardware solutions within a single quarter and outpacing traditional CSPs. AI software developers such as OpenAI prefer to lease compute resources rather than build and manage their own data centers, boosting demand for these agile cloud players.
Major cloud industry leaders, including AWS, Microsoft, Google, and Meta, also source compute power from Neo Cloud providers to supplement their capacities. Wang explained that this trend reflects the urgent need for AI compute resources. Established CSPs are struggling to deploy new data center infrastructure rapidly across diverse regions and often sign contracts extending up to three years in advance, indicating genuine demand despite delivery delays.
Wang addressed recent market speculation about the emergence of an AI bubble, contending that the current situation exemplifies authentic market growth and opportunity. The inability to meet immediate supply from existing capacity and the prevalence of long-term agreements confirm that AI infrastructure demand remains robust rather than speculative.
In addition to collaborating closely with leading AI chip manufacturers, Pegatron emphasized that designing AI servers requires sophisticated integration of advanced cooling technologies, including liquid cooling, with power management. The company aims to build a comprehensive industrial ecosystem spanning automated manufacturing, design services, global logistics, and customer partnerships. This positioning seeks to establish Pegatron as a co-builder of AI data centers rather than merely a server supplier, underscoring the complexity and scale required to meet accelerating AI infrastructure demands.
Article edited by Charlene Chen


