Demand for high-performance compute and storage for AI training and inference continues to climb. Phison has partnered with AI infrastructure management software provider Infinitix to integrate its aiDAPTIV+ intelligent storage technology with Infinitix's AI-Stack platform, delivering an enterprise-grade AI training and inference solution that unifies hardware and software.
Phison said the collaboration uses high-speed SSDs and intelligent memory expansion to overcome the capacity constraints of traditional HBM and GDDR memory. By integrating aiDAPTIV+ with AI-Stack, enterprises can incorporate hardware acceleration into AI workload scheduling in Kubernetes-native environments, enabling end-to-end performance optimisation from model training to inference deployment.
Infinitix CEO WenYu Chen said AI has entered a phase of large-scale adoption driven by architectural and platform capabilities, where the priority is no longer raw compute power, but how efficiently that power is managed, scaled, and converted into business value.
The partnership brings storage-layer capabilities into AI infrastructure scheduling, allowing enterprises to integrate heterogeneous compute, memory, and storage resources in Kubernetes-native environments. This enables AI data centres to deploy large-scale model training and inference with more flexible and cost-efficient architectures, supporting scalable, enterprise-class AI platforms.
Phison CEO KS Pua said AI is rapidly shifting from single-GPU computing toward system-level architectures spanning multiple nodes and resources. With aiDAPTIV+, Phison incorporates the NAND storage layer into AI memory and compute architectures, redefining how AI systems scale. Through AI-Stack's native scheduling capabilities, NAND storage, memory, and compute resources can operate in coordination across enterprise environments.

Phison CEO KS Pua. Credit: Phison Blog
Built on a Kubernetes-native architecture, AI-Stack integrates GPU partitioning, aggregation, and cross-node computing, with full support for Nvidia and AMD GPUs. It enables unified management of conventional GPU servers and Phison's aiDAPTIV+ nodes on a single platform.
With multi-tenant access control, automated scheduling, centralised monitoring, and billing mechanisms, the platform reduces the complexity of AI infrastructure governance and operations. Enterprises can deploy large language model training and inference without relying solely on investment in high-end HBM-equipped GPUs.
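AI-Stack's scheduler itself is proprietary, but the Kubernetes-native mechanism it builds on can be illustrated with a standard pod specification. The sketch below is a generic example, not AI-Stack-specific: the image name and labels are hypothetical, and it assumes the stock NVIDIA device plugin (which exposes the `nvidia.com/gpu` resource name) rather than any vendor scheduler extensions.

```yaml
# Generic Kubernetes GPU request of the kind a platform such as AI-Stack
# would schedule. All names here are illustrative placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: llm-finetune-job        # hypothetical workload name
  labels:
    tenant: team-a              # multi-tenant platforms typically key on labels/namespaces
spec:
  restartPolicy: Never
  containers:
  - name: trainer
    image: registry.example.com/llm-trainer:latest   # hypothetical image
    resources:
      limits:
        nvidia.com/gpu: 1       # standard NVIDIA device-plugin resource name
```

A platform layer such as AI-Stack would add its own partitioning, cross-node aggregation, and billing logic on top of requests like this; the spec above shows only the baseline Kubernetes mechanism.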
Phison said the two companies will continue to deepen cooperation across AI, intelligent storage, and cloud operations to support efficient, scalable data infrastructure.
Article edited by Jack Wu