As global AI computing platforms continue to evolve toward higher density, increased power demands, and rack-scale integration, Chenbro unveiled its latest AI server chassis solutions at NVIDIA GTC 2026. The showcase features the NVIDIA MGX 1U Compute Tray for NVIDIA Vera Rubin NVL72 systems, along with a comprehensive product portfolio based on NVIDIA MGX architecture, demonstrating Chenbro's integrated capabilities spanning system design, thermal optimization, and rack-level deployment.
Building on its close integration with the NVIDIA ecosystem, Chenbro presented multiple server chassis solutions designed in alignment with NVIDIA reference architectures and the NVIDIA MGX architecture. These solutions address high-density computing, liquid cooling technologies, and enterprise-grade deployment requirements, enabling cloud service providers (CSPs) and enterprise data centers to accelerate AI infrastructure deployment.
NVIDIA MGX 1U Compute Tray for NVIDIA Vera Rubin NVL72: A New Standard for High-Density AI Computing
As next-generation AI training and inference workloads continue to scale rapidly, data centers face increasing demands for space efficiency and advanced thermal design. At NVIDIA GTC 2026, Chenbro highlights the NVIDIA MGX 1U Compute Tray for Vera Rubin NVL72, engineered with optimized mechanical design and airflow management to deliver high-density computing within a 1U footprint. The solution is tailored for large-scale model training and high-performance inference applications.
Chenbro also showcases the NVIDIA MGX 1U for NVIDIA GB200 NVL4, supporting high-performance inference and flexible deployment, and the NVIDIA MGX 2U Vera Server Chassis, offering enhanced scalability and enterprise-level integration options.
Together, these solutions demonstrate Chenbro's system integration expertise under the NVIDIA MGX architecture framework.
NVIDIA MGX Rack: Advancing from System Integration to Rack-Scale Deployment
As AI infrastructure expands from individual systems to rack-scale environments, Chenbro highlights its integration capabilities based on NVIDIA MGX architecture. The company leverages its strengths in mechanical design and mass production to support the deployment of high-density AI computing environments in modern data centers.
By closely aligning with NVIDIA reference architectures and the broader NVIDIA MGX ecosystem, Chenbro enables customers to extend from system-level builds to rack-level integration, accelerating scalable AI infrastructure deployment.
NVIDIA MGX 6U Liquid-Cooled Server Chassis: Liquid Cooling for High-Power AI Platforms
In response to the power consumption and energy efficiency requirements of AI platforms, Chenbro presents the NVIDIA MGX 6U Liquid-Cooled Server Chassis. Designed to support high-power liquid-cooled deployment architectures, the solution enhances thermal efficiency and improves data center power usage effectiveness (PUE) through optimized mechanical and fluid management design, ensuring stable operations for next-generation AI data centers.
Additional showcased solutions include the NVIDIA MGX 4U Air-Cooled Server Chassis and the NVIDIA MGX 2U Short-Depth Server Chassis, which provide flexible deployment options for enterprise server rooms and diverse application scenarios.
Strengthening Global Footprint and AI Infrastructure Capabilities
Chenbro CEO Corona Chen stated that as AI applications continue to expand, market demand for high-density computing, liquid cooling, and rack-scale integration capabilities is rapidly increasing. Chenbro will continue deepening its collaboration within the ecosystem, strengthening its integrated R&D and manufacturing capabilities to help customers accelerate the adoption of next-generation AI platforms.
With its global manufacturing footprint and localized service capabilities, Chenbro is committed to supporting diverse deployment needs across AI, HPC, and data center applications—driving sustained operational growth and industry competitiveness.
At GTC 2026, Chenbro not only presents its product portfolio but also demonstrates its support for NVIDIA AI platforms, spanning system integration, thermal optimization, and rack-scale deployment, enabling scalable AI infrastructure development worldwide.

Chenbro supports diverse AI, HPC, and data center deployment needs. Credit: Chenbro