
ASUS expands AI infrastructure solutions, showcasing AI factory vision at Computex 2025

News highlights

At COMPUTEX 2025, ASUS showcased its comprehensive AI infrastructure solutions, addressing the computing needs of data centers, the edge, and a wide range of intelligent applications. With a robust product portfolio, ASUS brings the NVIDIA AI Factory vision to life, illustrating how AI will be deeply embedded into enterprise IT infrastructure. This evolution continues to drive forward applications and services that shape the future of AI and high-performance computing.

Comprehensive Showcases: From Hardware to Data Center Management

The ASUS booth centered on a diversified server product line and integrated hardware-software systems, paired with data center management and monitoring solutions for a well-rounded, detailed exhibit. In particular, ASUS featured purpose-built architectures aligned with the NVIDIA AI Factory concept and designed to optimize next-generation data center infrastructure.

As future AI applications demand ever-larger token-processing throughput, computing requirements grow significantly. ASUS's new architecture designs focus on boosting performance and AI processing power while simultaneously addressing the challenges of thermal management. The booth highlighted versatile cooling options, including air- and liquid-cooled cabinet systems, which proved one of the key attractions for visitors.

Complete Lineup: NVIDIA HGX, NVIDIA MGX, and NVIDIA Blackwell Architectures for Rapid AI Infrastructure Growth

At the entrance of the exhibition space, ASUS showcased its AI POD full-rack solution built on the NVIDIA HGX platform, supporting NVIDIA Blackwell and Blackwell Ultra GPUs. Known for its scale-up capability within a single server, HGX is a critical building block for high-performance computing needs.

Next come the NVIDIA MGX-based solutions, which use PCIe to deliver scalable and flexible AI computing power. ASUS emphasized its certified, reference-architecture-based product line, validated by NVIDIA for performance and reliability, which helps customers build independent, tailored AI infrastructure for a variety of use cases.

This year's highlight is the latest NVIDIA Grace Blackwell architecture. ASUS presented its NVIDIA GB200 NVL72 rack solution, targeting high-end AI POD full-rack systems built on the NVIDIA GB200 and GB300 platforms. This powerful setup pairs 72 NVIDIA Blackwell GPUs (Blackwell Ultra on the GB300 platform) with 36 NVIDIA Grace CPUs, arranged across 18 compute trays and 9 switch trays. Notably, the entire rack is equipped with full liquid cooling, maximizing computing density and energy efficiency and showcasing the next generation of performance-driven design.
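As a rough sanity check on those counts, the short sketch below walks through the commonly cited NVL72 breakdown. The 18-tray layout comes from the exhibit description; the per-tray and per-superchip figures (two Grace Blackwell superchips per compute tray, each pairing one Grace CPU with two Blackwell GPUs) are assumptions based on NVIDIA's published NVL72 configuration, not an ASUS-specific specification.

```python
# Rough NVL72 composition check (assumed breakdown, not an official ASUS spec).
compute_trays = 18                # from the exhibit description
superchips_per_tray = 2           # assumption
grace_cpus_per_superchip = 1      # assumption
blackwell_gpus_per_superchip = 2  # assumption

total_cpus = compute_trays * superchips_per_tray * grace_cpus_per_superchip
total_gpus = compute_trays * superchips_per_tray * blackwell_gpus_per_superchip

print(f"Grace CPUs:     {total_cpus}")  # 36
print(f"Blackwell GPUs: {total_gpus}")  # 72
```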

Software Tools Empower Enterprise AI Development

On the software front, ASUS supports customers from the early planning and deployment of AI server racks through to infrastructure management and monitoring. Solutions such as ASUS Control Center (ACC) and the ASUS Infrastructure Deployment Center (AIDC) offer one-stop deployment and operations support, addressing the entire data center lifecycle.

ASUS also presented its collaboration with NVIDIA on deploying agentic AI systems: custom AI agents capable of reasoning, planning, and taking action. The integration of NVIDIA AI Workbench enables development teams to flexibly build applications on local or remote GPUs, streamlining workflows from experimentation and prototyping to proof of concept. This delivers an accessible, robust toolkit for enterprise AI development.
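The article describes agentic AI only at a high level. As a purely illustrative sketch, not ASUS's or NVIDIA's implementation, the loop below shows the reason-plan-act pattern such agents typically follow; the call_llm function and the sample tools are placeholders standing in for whatever model endpoint and internal services a team might host on local or remote GPUs.

```python
# Illustrative reason-plan-act loop for an "agentic" AI assistant.
# call_llm() and the entries in TOOLS are placeholders, not real ASUS/NVIDIA APIs.
from typing import Callable

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to a hosted LLM and return its reply."""
    raise NotImplementedError("Wire this to your own model endpoint.")

TOOLS: dict[str, Callable[[str], str]] = {
    "search_docs": lambda query: f"(top documents matching '{query}')",
    "run_report": lambda name: f"(results of report '{name}')",
}

def run_agent(task: str, max_steps: int = 5) -> str:
    context = f"Task: {task}"
    for _ in range(max_steps):
        # Reason and plan: ask the model for its next step.
        plan = call_llm(
            f"{context}\nReply with 'tool: argument' or 'FINISH: final answer'."
        )
        if plan.startswith("FINISH:"):
            return plan.removeprefix("FINISH:").strip()
        # Act: run the chosen tool and feed the observation back to the model.
        tool, _, arg = plan.partition(":")
        observation = TOOLS.get(tool.strip(), lambda _a: "unknown tool")(arg.strip())
        context += f"\nAction: {plan}\nObservation: {observation}"
    return "Stopped after reaching the step limit."
```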

Additionally, inspired by ChatGPT, ASUS introduced AI Hub, a software solution that simplifies large language model (LLM) usage for enterprise teams. Now part of the ASUS AI infrastructure portfolio, AI Hub provides pre-integrated tools, open-source software, and LLM resources. It also supports collaboration with AI model developers, cloud service providers, and system integrators, helping enterprises build and scale their AI applications more easily and accelerate their innovation journey.
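The article does not detail AI Hub's programming interface. Many on-premises LLM stacks expose an OpenAI-compatible endpoint, so the hedged sketch below assumes such an endpoint is available; the base URL, model name, and API key are hypothetical, and the snippet simply shows how an enterprise team might query a locally hosted model.

```python
# Hypothetical example: querying a locally hosted LLM through an
# OpenAI-compatible endpoint. The URL, model name, and key are placeholders;
# the article does not specify AI Hub's actual interface.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical on-prem endpoint
    api_key="not-needed-for-local",       # placeholder credential
)

response = client.chat.completions.create(
    model="local-llm",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are an internal enterprise assistant."},
        {"role": "user", "content": "Summarize yesterday's production incidents."},
    ],
)
print(response.choices[0].message.content)
```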

Article edited by Jack Wu