Wednesday 8 April 2026
DEEPX Speeds Physical AI Commercialization: 27 Orders In Seven Months
DEEPX, a Seoul-based fabless semiconductor company developing ultra-low-power AI inference chips for physical AI applications, has secured 27 commercial purchase orders across eight countries within seven months of starting mass production of its first-generation AI chip - a pace that industry observers describe as highly unusual for an emerging fabless company at such an early stage of commercialization.
Thursday 16 April 2026
Unveiling ASUS Liquid-Cooled AI Infrastructure Through the Nano4 AI Supercomputer Project at NCHC
With global oil prices continuing to surge amid war, the global market faces formidable challenges from the turbulence buffeting the world economy in 2026. Major Taiwanese electronics manufacturers and supply chains are pivoting to review comprehensive AI strategies, adopting a particularly aggressive stance in their deployment of cloud server systems and AI infrastructure in pursuit of an operational breakthrough while the market outlook remains far from optimistic.

ASUS unveils its complete Professional Services for Sovereign AI. Since its first project in 2018, which resulted in Taiwan's first AI supercomputer, Taiwania 2, ASUS has been actively developing sovereign AI infrastructure and providing customized AI solutions to clients. Delivering trusted AI with total flexibility, from rack-scale AI factories and desktop AI supercomputing to edge AI and enterprise AI deployments, the company is redefining sovereign AI solutions in a way that reinforces its position as a global leader in AI-driven digital transformation.

On April 1, 2026, ASUS joined the DIGITIMES online seminar titled "Leading the Way in Sovereign AI: A Look at Next-Generation AI Computing and Storage Architecture from Supercomputing Performance". The event starts with the construction and practical application of next-generation supercomputers at the National Center for High-Performance Computing (NCHC) of the National Applied Research Laboratories, a collaborative project in which ASUS powers NCHC to build Taiwan's AI supercomputer and the next generation of high-performance computing (HPC) systems.

As AI and HPC workloads push compute density and power consumption beyond the capabilities of traditional air cooling, the flagship ASUS AI POD systems redefine the architecture of the rack-scale powerhouse, with storage and liquid cooling designed for massive AI workloads.
This momentum will continue to maximize computing power density through the NVIDIA HGX™ and Blackwell platforms, and the seminar provides insights into solving heat-dissipation bottlenecks with advanced liquid cooling technology to meet the performance and energy-efficiency requirements of high-compute environments.

Practical Experience in Building Leading Infrastructure in Taiwan High Performance Computing

The first session features a keynote lecture by Nobel Hsia, ASUS Deputy Manager of Product Planning, titled "Leading the Sovereign AI Wave: From the National Center for High-performance Computing (NCHC) Projects to ASUS's Forward-Looking Infrastructure and Storage Solutions." ASUS's sovereign AI strategy targets the business opportunities arising as countries strive to secure data sovereignty and security through in-house data processing and computing capabilities, and promotes the transformation of scientific research in industry. ASUS has partnered with NVIDIA to build massive AI supercomputer systems under an "ALL IN on AI" strategy. Through the maximum computing power density provided by the NVIDIA HGX™ and Blackwell platforms, ASUS has deployed a complete portfolio ranging from rack-scale AI factories to edge and enterprise deployments. With a proven track record of successfully supplying AI infrastructure at the B300 and GB300 levels, ASUS has become a trusted partner providing complete end-to-end solutions.

It is worth mentioning that ASUS has ten years of experience collaborating with the National Center for High-performance Computing on supercomputer projects. ASUS's sovereign AI expertise is proven by several successful national-level AI deployments, including flagship supercomputing initiatives such as Taiwania 2 and Forerunner 1, cementing its leadership in high-performance computing and AI systems.
Through ASUS Professional Services, the collaboration covers everything from design and deployment to operation. The latest Nano4 (Crystal 26) cluster, an NVIDIA HGX™ H200 AI server system, was built as a next-generation AI supercomputer capable of handling complex large language models (LLMs), deep learning, and advanced HPC workloads. In this new project ASUS has built Taiwan's first AI supercomputer based on the NVIDIA GB200 NVL72 system architecture with direct liquid-cooling technology. The engineering team played an important role in server delivery, large-scale computing architecture planning, deployment, and optimization, which has become the foundation for ASUS to expand its overseas AI deployments and demonstrates ASUS's ability to build computing power from the national to the international level.

Opening the Agentic AI Frontier: ASUS Supports NVIDIA Vera Rubin Platform and Infrastructure

In response to the advent of the new-generation NVIDIA Vera Rubin platform and its accompanying infrastructure architecture, ASUS has developed the next-generation ASUS AI POD, highlighting its proficiency in liquid-cooled AI solutions. Spanning rack-scale AI factories, data center servers, desktop workstations, and edge AI devices, this solution delivers end-to-end AI computing power and infrastructure, specifically targeting trillion-parameter models and million-token contexts while maximizing efficiency across power, memory, and compute.

Hsia began by introducing the flagship ASUS XA VR721-E3 architecture. Supporting the NVIDIA Vera Rubin NVL72 platform, the system is purpose-built for large-scale AI model inference and training, delivering massive AI performance for large-scale AI factories while also accommodating the specific workloads required for agentic AI.
Designed as a 100% liquid-cooled, rack-scale system, it features a Thermal Design Power (TDP) of up to 227 kW, a capability that fully satisfies the immense performance demands of computing AI models with trillions of parameters.

Furthermore, addressing rigorous enterprise-grade data-center demands, ASUS has simultaneously launched the XA NR series. These products support the NVIDIA HGX™ Rubin NVL8 architecture, featuring eight Rubin GPUs interconnected via sixth-generation NVLink, with each GPU delivering a maximum bandwidth of 800 GB/s. To facilitate a seamless and cost-effective transition to liquid cooling, ASUS offers two distinct solutions: the XA NR1I-E12L, an innovative hybrid-cooled option, and the XA NR1I-E12LR, a 100% liquid-cooled system.

To support these powerful systems and democratize AI development, ASUS has also established a robust data ecosystem by partnering with NVIDIA-Certified storage providers. The storage solutions offer technologies such as JBOD, DPDK, and object storage, delivering scalable, resilient options for memory-intensive AI applications and supporting integrated storage and operational management.

On the software front, ASUS provides a suite of one-stop platforms. Leveraging the ASUS Infrastructure Deployment Center (AIDC), ASUS automates the setup process, including ACC (ASUS Control Center) and BMC configuration, accelerating time-to-market for critical research resources.
To address the full spectrum of requirements for system construction, deployment, and operations, ASUS provides expert consultation and a broad portfolio of tailor-made AI solutions while also catering to the comprehensive lifecycle management needs of the entire data center.

WEKA Data Storage Platform Is Redefining AI Storage Economics

To meet the elastic data storage requirements of AI workloads across diverse usage scenarios, ASUS's AI POD system incorporates a comprehensive storage solution featuring a high-speed, all-flash NVMe SSD storage architecture. ASUS has collaborated with its ecosystem partners to develop a next-generation unified storage system with high-reliability storage servers and professional network validation. WEKA, the AI storage company, is one of the partners providing high-performance, software-defined storage for GPU-accelerated AI and HPC environments, combining low latency with unified data management.

The second presentation of the session was delivered by Ray Wu, Senior Consultant for the Asia-Pacific region at WEKA, titled "Ultimate Data Empowerment: How WEKA Helps NCHC Build an AI-Accelerated Computing Storage Architecture". To address NCHC's requirements for extreme performance and energy efficiency, Wu highlighted unified data management solutions that provide rapid scalability, emphasizing flexibility and intelligent adaptive capabilities and effectively addressing the demands of a wide array of usage scenarios.

Addressing the requirements of diverse application environments such as Kubernetes and Slurm in NCHC's Nano4, WEKA serves as the high-performance storage foundation through its efficient storage platform. Based on the NVIDIA AI Data Platform reference architecture, the solution employs an end-to-end system to accelerate data processing performance for HPC applications.
This assists NCHC in optimizing the deployment of high-efficiency AI computing, dramatically reducing the time required for AI application deployment and development from months to mere minutes. The ASUS-WEKA storage solution demonstrates technical excellence in leveraging the ecosystem and integrated services to rapidly deliver the low-latency, scalable, high-throughput capabilities required, and to swiftly support the next generation of agent-based AI applications. WEKA solutions empower enterprises to transition from experimentation to full-scale operations, rendering AI applications economically viable and maximizing performance across a wide spectrum of fields, from next-generation AI agent systems to AI-enabled healthcare applications.

At this event, ASUS showcases fully liquid-cooled AI infrastructure with the critical thermal management required for the next-generation NVIDIA Vera Rubin NVL72 system platform. By efficiently dissipating heat from high-performance CPUs, GPUs, and accelerator-dense racks, ASUS significantly reduces energy consumption while supporting unprecedented rack density, enabling enterprises and cloud service providers to build high-performance, energy-efficient large-scale AI clusters with unmatched efficiency and dramatically reduced PUE and TCO.

【White paper download】Empowering Scalable AI Reasoning with ASUS AI POD featuring
【Webinar on-demand】Bridging NCHC Success to NVIDIA Vera Rubin Architecture
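The PUE benefit of liquid cooling mentioned above can be made concrete with a small, purely illustrative calculation. PUE (Power Usage Effectiveness) is total facility power divided by IT equipment power; the overhead figures below are hypothetical assumptions for the sake of the sketch, not ASUS or NCHC measurements.

```python
# Illustrative sketch (assumed figures, not ASUS/NCHC data): how a lower
# PUE from liquid cooling translates into facility-level energy savings.
# PUE = total facility power / IT equipment power.

def pue(it_power_kw: float, cooling_kw: float, other_kw: float = 0.0) -> float:
    """Power Usage Effectiveness for a facility."""
    return (it_power_kw + cooling_kw + other_kw) / it_power_kw

# Hypothetical 1 MW IT load; assumed overheads for air vs liquid cooling.
it_load = 1000.0
air_cooled = pue(it_load, cooling_kw=500.0, other_kw=100.0)     # PUE 1.60
liquid_cooled = pue(it_load, cooling_kw=150.0, other_kw=100.0)  # PUE 1.25

# Annual facility energy saved (kWh) for the same IT load.
hours_per_year = 8760
saved_kwh = (air_cooled - liquid_cooled) * it_load * hours_per_year
print(f"air PUE={air_cooled:.2f}, liquid PUE={liquid_cooled:.2f}")
print(f"annual facility energy saved: {saved_kwh:,.0f} kWh")
```

Under these assumed overheads, the same 1 MW of IT load consumes roughly 0.35 MW less facility power once the cooling overhead drops, which is the mechanism behind the PUE and TCO reductions the article describes.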
Tuesday 14 April 2026
Cypress Technology's Professional AV Solutions Drive Digital Transformation in Courtrooms
Cypress Technology has been dedicated to the research, development, and manufacturing of professional audiovisual solutions for 36 years, with all R&D and production facilities based in Taiwan. In recent years, the company has actively engaged with various application environments, gaining deep insights through close collaboration with users and on-site evaluations. This approach has enabled the continuous development of integrated solutions that meet the specific needs of diverse scenarios.

With the rapid evolution of audiovisual technologies, the HyLinX AV over IP solution, which leverages standard network packet transmission, has emerged as a key trend in the Pro AV industry due to its flexible deployment, centralized management, and high scalability. Cypress Technology has successfully implemented this technology in multiple judicial institutions in Taiwan and abroad, helping transform courtrooms into efficient and intelligent digital environments.

During court proceedings, the ability to present evidence and critical information to judges and juries in a real-time, clear, and transparent manner is essential to ensuring a smooth trial. As a result, comprehensive AV equipment configuration is crucial. Modern courtrooms are typically equipped with high-resolution displays and document cameras to present various types of evidence and documents in real time. Cypress Technology's professional AV integration solutions enable rapid switching and synchronized display of multiple signal sources, ensuring clear information delivery and intuitive operation.

In addition, Cypress Technology's judicial AV integration solution can transmit video and audio signals from identification rooms on different floors to the courtroom in real time. This supports remote interrogation applications, allowing witnesses or involved parties to complete appearance procedures without being physically present.
Through high-quality video transmission and synchronized recording functions, the entire trial process can be fully preserved, enhancing the convenience and reliability of judicial operations. Furthermore, the system displays evidence and important information simultaneously on high-resolution equipment, allowing judges and juries to grasp key content in real time. By integrating remote video participation with evidence retention, the solution not only ensures information integrity but also enhances trial efficiency and overall workflow.

Beyond judicial applications, Cypress Technology's professional AV solutions are widely utilized in high-performance corporate meeting rooms, command and control centers, smart manufacturing situation rooms, medical imaging transmission, and education and public display environments. Moving forward, Cypress Technology will continue to invest in the research and development of high-efficiency audiovisual technologies while integrating sustainability into its product design and manufacturing processes. Cypress Technology remains committed to providing stable and reliable solutions that enable smarter and more efficient audiovisual workflows across diverse applications.

Evidence Presentation in Digital Courtroom. Credit: Cypress Technology
Friday 10 April 2026
STMicroelectronics expands 800 VDC AI datacenter power portfolio with NVIDIA
STMicroelectronics (NYSE: STM), a global semiconductor leader serving customers across the spectrum of electronics applications, today announced the expansion of its 800 VDC power conversion portfolio with two new advanced architectures: 800 VDC to 12V and 800 VDC to 6V. Developed according to the NVIDIA 800 VDC reference design, these new power conversion stages complement the previously introduced 800 VDC to 50V solution. The rapidly emerging 800 VDC data center architecture enables higher energy efficiency, reduces power losses, and supports more scalable, high-compute-density infrastructure for hyperscalers and AI compute.

"As AI infrastructure compute scale continues to expand fast, it requires higher voltage distribution and greater density, which can only be achieved with system-level innovation for each of the different AI server form factors," said Marco Cassis, President, Analog, Power & Discrete, MEMS and Sensors Group, and Head of Strategy, System Research and Applications, Innovation Office at STMicroelectronics. "With these new converters for 800 VDC power distribution, ST brings a complete set of solutions to support the deployment of gigawatt-scale compute infrastructure with more efficient, scalable, and sustainable power architectures."

A Complete 800 VDC Ecosystem for the Different AI Server Form Factors

The expansion to 12V and 6V output stages reflects the industry's move toward different server architectures requiring different power delivery topologies depending on GPU generation, server height, form factor, and thermal envelope for large-scale training clusters, inference farms, and high-density AI infrastructures.
The 50V, 12V, and 6V intermediate DC buses will all coexist in AI data centers depending on rack density, GPU configuration, and cooling strategy.

The new 800 VDC to 12V converter enables high-efficiency distribution from rack-level power shelves directly to the voltage domains that feed advanced AI accelerators. The new 800 VDC to 6V path allows OEMs to reduce the number of conversion stages and move the 6V bus closer to the GPU. This reduces copper usage, minimizes resistive losses, and improves transient performance, a critical differentiator for large-scale training clusters.

Back in October 2025, STMicroelectronics introduced a fully integrated prototype power-delivery system showcasing a compact GaN-based LLC converter operating directly from 800 V at 1 MHz with over 98% efficiency and exceptional power density in a smartphone-sized footprint, exceeding 2,600 W/in³ at 50 V. The three solutions combine ST technologies across power semiconductors (silicon, SiC, GaN), analog and mixed-signal, and microcontrollers.

Technical Highlights of the New 12V and 6V Architectures

Direct 800 VDC to 12V high-efficiency conversion:
- Eliminates the traditional 54V intermediate stage, reducing conversion steps and system-level losses.
- Enables higher rack-level efficiency, lower copper usage, and simplified integration for future GPU generations.
- Includes a newly developed high-density power delivery board (PDB) achieving efficiency targets exceeding those of the previous two-stage conversion paths.

800 VDC to 6V architecture for near-GPU conversion:
- Is designed for system builders who require power stages closer to the GPU, minimizing IR drop and improving response under fast load transients.
- Completes the topology portfolio for servers with ultra-dense GPU configurations.

Additional technical information is available at the site.
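The efficiency logic behind these architectures can be illustrated with a short back-of-the-envelope sketch. The numbers below are purely hypothetical assumptions for illustration, not ST or NVIDIA datasheet values: cascaded conversion stages multiply their efficiencies, so removing the 54V intermediate stage removes a multiplicative loss term, and for a fixed delivered power, conduction (I²R) loss in a distribution run falls with the square of the bus voltage.

```python
# Illustrative sketch (assumed efficiencies and resistances, not ST figures):
# why fewer conversion stages and higher-voltage distribution reduce losses.

def chain_efficiency(*stage_effs: float) -> float:
    """Overall efficiency of cascaded conversion stages (product of stages)."""
    eff = 1.0
    for e in stage_effs:
        eff *= e
    return eff

# Hypothetical per-stage efficiencies.
two_stage = chain_efficiency(0.975, 0.96)   # 800 VDC -> 54V -> 12V
single_stage = chain_efficiency(0.965)      # direct 800 VDC -> 12V

def copper_loss_w(power_w: float, bus_v: float, resistance_ohm: float) -> float:
    """Conduction (I^2 R) loss for a given delivered power and bus voltage."""
    current = power_w / bus_v
    return current ** 2 * resistance_ohm

# Same 100 kW rack load over an assumed 10 mOhm distribution run.
loss_800v = copper_loss_w(100_000, 800, 0.01)
loss_54v = copper_loss_w(100_000, 54, 0.01)
print(f"two-stage eff {two_stage:.3f} vs direct {single_stage:.3f}")
print(f"copper loss: {loss_800v:.0f} W at 800V vs {loss_54v:.0f} W at 54V")
```

With these assumed numbers, the direct path beats the cascade whenever its single-stage efficiency exceeds the product of the two cascaded stages, and the (800/54)² ≈ 220× reduction in conduction loss shows why the industry is raising the distribution voltage before the final step-down near the GPU.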