PEGATRON, a globally recognized Design, Manufacturing, and Service (DMS) provider, is pleased to announce its participation in COMPUTEX 2025, where it will unveil a comprehensive portfolio of advanced rack-scale solutions designed to meet the increasing complexity and scale of AI and data center workloads. These solutions deliver exceptional compute density, energy efficiency, and scalability, aligned with open infrastructure standards.

A major highlight of PEGATRON's COMPUTEX showcase is the introduction of the RA4802-72N2, a rack solution featuring the NVIDIA GB300 NVL72, which includes 72 NVIDIA Blackwell Ultra GPUs and 36 NVIDIA Grace CPUs. This system delivers up to a 50X increase in AI factory output with optimized inference capabilities. The rack integrates PEGATRON's in-house developed Coolant Distribution Unit (CDU) to enhance cooling efficiency in high-density environments. Equipped with redundant hot-swappable pumps and a cooling capacity of 310 kW, it ensures optimal performance and high reliability for mission-critical data center operations.

Also debuting is the PEGATRON AS208-2A1, a 2U liquid-cooled server system accelerated by the NVIDIA HGX B300 system and dual AMD EPYC™ 9005 processors. Scaled to a 48U NVIDIA MGX rack solution, it supports 128 GPUs and 32 CPUs within a high-efficiency, direct liquid cooling framework. This platform delivers exceptional compute density and thermal control while enabling efficient GPU utilization across the rack.
Designed for the AI reasoning era with increased compute and expanded memory capacity, it offers breakthrough performance for complex workloads, from agentic systems and reasoning to video generation, making it ideal for every data center.

Additionally, PEGATRON will be offering NVIDIA RTX PRO 6000 Blackwell servers (AS400-2A1, AS205-2T1), which provide nearly universal acceleration for a broad range of enterprise AI workloads, from multimodal AI inference and physical AI to design, scientific computing, graphics, and video applications.

Pushing the boundaries of rack-scale compute performance even further, PEGATRON unveils the AS501-4A1, a 5OU system featuring the latest AMD Instinct™ MI350 series GPUs and AMD EPYC™ 9005 processors. Scaled up to a 51OU liquid-cooled rack solution, it supports configurations of up to 128 AMD Instinct™ MI350 series GPUs. The solution employs direct-to-chip liquid cooling across both GPUs and CPUs, enabling sustained performance for generative AI, inference, training, and high performance computing, all within a compact, energy-optimized footprint.

"With the increasing scale and complexity of AI workloads, data center infrastructure must evolve to deliver higher performance, better efficiency, and thermal resilience," said Dr. James Shue, SVP & CTO of PEGATRON. "Our latest liquid-cooled solutions reflect our commitment to enabling the next wave of AI innovation through scalable, ultra high-density systems optimized for real-world deployment."

PEGATRON welcomes attendees to Booth #L0118, 4th Floor, Nangang Exhibition Center, Hall 1, from May 20–23, 2025, to explore its newest platforms and engage with the experts behind PEGATRON's breakthrough compute and cooling technologies.

PEGATRON Liquid-Cooled Ultra High Density GPU Rack
Photo: Company
Retronix Technologies Inc. announced the launch of two cutting-edge AI edge computing platforms, developed in collaboration with Renesas Electronics Corporation.

The newly unveiled Sparrow Hawk Single Board Computer (SBC) and Raptor System on Module (SoM) are both powered by the latest Renesas R-Car V4H System-on-Chip (SoC), delivering up to 30 TOPS (Dense) of AI inference performance. These open platforms are designed to support a wide range of embedded edge AI applications and smart automotive solutions.

Sparrow Hawk focuses on robotics, industrial automation, and rapid prototyping, offering a highly flexible and cost-effective development platform. Raptor, with its modular design and multi-camera processing capabilities, is engineered for commercial vehicles, advanced driver-assistance systems (ADAS), and autonomous guided vehicles (AGVs), meeting demanding requirements for reliability and AI edge computing.

Product Highlight 1: Sparrow Hawk — A Versatile Platform for Edge AI Applications

Sparrow Hawk is a compact and highly expandable edge AI development board featuring the Renesas R-Car V4H SoC. It offers up to 30 TOPS of dense AI inference performance and supports a fully open-source Linux environment, accelerating the development of industrial and embedded AI solutions.

Key Features:
*Optimized for Edge Intelligence: Designed for industrial robots, smart manufacturing, and autonomous control systems.
*Raspberry Pi HAT Compatible: Easily integrates with popular modules and sensors to streamline development.
*High AI Performance: Handles real-time image processing and AI workloads with ease thanks to 30 TOPS deep learning capabilities.
*Open Development Environment: Built on an open-source Linux architecture with extensive community support.
*Developer-Friendly Pricing: Available through a campaign program at only USD 300, with no paper contract required to get started.
*Compact Design: Measures just 146mm x 90mm, ideal for embedded and terminal devices.

Rich I/O and Expansion Interfaces:
*8GB / 16GB LPDDR5 memory
*Dual-camera interface and 40-pin GPIO header
*1x DisplayPort, PCIe (4x USB3.0, 1x M.2 Key-M), 2x CAN-FD, Audio (2x In, 1x Out), and AVB Ethernet
*Supports USB PD 20V power input and MicroSD removable storage

Retronix Sparrow Hawk
Photo: Company

Product Highlight 2: Raptor — Automotive-Grade AI SoM for Smart Vehicle Vision Processing

Raptor is a high-performance SoM designed for automotive vision processing and edge AI computation. Powered by the Renesas R-Car V4H SoC, it supports multiple camera inputs, pre-processing, and AI inference. Raptor is ideal for applications including ADAS, smart cockpits, surround-view systems, and AGVs.

Key Features:
*Automotive-Grade Architecture: Built with safety-oriented design principles, long-term supply, and compliance with automotive standards.
*Multi-Camera Support: Integrated ISP supports up to 8 video channels with synchronized vision processing.
*Powerful AI and Specialized Automotive IP: Delivers 30 TOPS AI inference performance with integrated Image Rendering Unit, Dense Optical Flow, Structure from Motion, and CV/Deep Learning Engines.
*Reference Carrier Design & Custom Development: Retronix offers reference designs and engineering services to accelerate product development.
*Comprehensive Software Resources: Compatible with Yocto Linux and includes the Renesas AI Hybrid Compiler toolkit.
*High Reliability: Designed for high-temperature environments with optimized power efficiency for automotive use.

Retronix Raptor
Photo: Company

Availability and Computex Showcase

Sparrow Hawk and Raptor are scheduled to sample in late Q2 2025, alongside the launch of a developer program and open-source community support platform to help users rapidly prototype and deploy AI applications.

We warmly invite industry professionals to visit Retronix at Computex 2025 (Booth N0814 / B-4) to experience the capabilities of Sparrow Hawk and Raptor firsthand. Explore their architectures, image processing performance, and AI inference efficiency across smart manufacturing, robotics, unmanned vehicles, and intelligent automotive applications.
At the 2025 COMPUTEX Product Showcase, JMicron Technology Corp., a global leader in high-speed interface bridge controllers, alongside its wholly-owned subsidiary KaiKuTeK Inc., introduced a new line of ultra-fast storage bridge controller solutions. These advancements enable next-generation enclosure types and open the door to a wide range of new applications in data storage. In addition to the storage innovations, the companies also unveiled their latest breakthrough in smart sensing: a 60GHz millimeter-wave radar-based AI sensing technology, bringing exciting news for the smart home experience.

JMicron demonstrated its latest high-speed bridge controllers, highlighting the JMS591 and the JMB595. The JMS591 (USB 3.2 Gen2 x2 & eSATA 6Gb/s to 5 ports SATA 6Gb/s) is a single-chip multi-bay hardware RAID solution supporting RAID 0/1/5/10/JBOD. In demonstrations, its sequential read/write performance reached 2,000 MB/s, and it can also control computer fans and a liquid crystal display module (LCM). Compared to current solutions, the JMS591 upgrades data transfer speeds and improves the stability and effectiveness of hardware RAID functions. By providing a highly cost-effective multi-bay RAID storage solution, the JMS591 is expected to be adopted widely across multi-bay applications such as the network-attached storage (NAS), direct-attached storage (DAS), network video recorder (NVR), and digital video recorder (DVR) markets. Meanwhile, the JMB595 (PCIe Gen4x4 to 16 ports SATA 6Gb/s), a multi-bay storage solution prototype, is not only suitable for high-end surveillance and private cloud applications, but also serves as another option in the entry-level server market.
Hence, the industry shows high expectations for the JMB595.

"Through our accumulated technical expertise and position as a market leader, we are creating a high-speed data transfer and storage application trend, collaborating with our key clients to develop the next-generation bridge controllers," said Tony Lin, JMicron's VP of Marketing & Sales Center.

KaiKuTeK unveiled its latest 60GHz mmWave radar AI sensing technology, which integrates a proprietary antenna design, advanced DSP and AI accelerators, and self-developed algorithms. This innovation brings precise target behavior tracking and positioning recognition. The breakthrough effectively addresses long-standing challenges in traditional smart home products related to human presence detection. For instance, smart electronic locks can detect an approaching person via mmWave radar and automatically activate facial recognition or other unlocking modes. Fans and air conditioners can detect user locations to adjust airflow dynamically, creating a "wind follows the person" effect or enabling personalized temperature control. Meanwhile, TVs can optimize sound staging based on viewer positioning, delivering an immersive experience. This innovation not only enables electronic devices such as electronic locks, fans, air conditioners, and TVs to interact with users more intelligently, but its streamlined design also significantly reduces the Total Cost of Ownership (TCO) and delivers simultaneous benefits of energy conservation and carbon reduction, setting a new standard in the consumer electronics market.

"Our long-term focus is on integrating mmWave radar with DSP and AI to create more intuitive and intelligent human-machine interfaces," said Mike Wang, CEO of KaiKuTeK. "The adoption of 60GHz mmWave radar represents a breakthrough, not only solving smart home detection challenges but also introducing unprecedented convenience for users.
We look forward to expanding this technology into industrial and IoT applications."

With its leading expertise in DSP/AI/ML technologies and antenna design, KaiKuTeK continues to demonstrate its strong potential for technological innovation. The future of mmWave radar applications seems promising, a trend driven by rising demand for contactless technologies and intelligent automation. In response, KaiKuTeK is actively partnering with global technology leaders to fast-track commercialization efforts. The company plans to introduce a new wave of consumer products featuring this advanced radar technology in the second half of 2025. This innovation opens new growth opportunities across industries, setting the stage for the next generation of smart environments.

We sincerely invite you to visit JMicron and KaiKuTeK at Courtyard by Marriott Taipei #Sea Hall during COMPUTEX.

JMS591 multi-bay hardware RAID solution
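As background on the RAID 5 mode the JMS591 implements in hardware: parity is the bytewise XOR of the data stripes, so the array can rebuild any single failed drive from the survivors. A minimal illustrative sketch in Python (purely conceptual; the controller performs this in silicon, and the disk names are hypothetical):

```python
from functools import reduce

def xor_blocks(blocks):
    """Bytewise XOR of equal-length blocks -- the RAID 5 parity operation."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three data stripes on a hypothetical 4-bay array; the 4th bay holds parity.
stripes = [b"disk0-A", b"disk1-B", b"disk2-C"]
parity = xor_blocks(stripes)

# If disk 1 fails, its stripe is recovered from the remaining data + parity.
rebuilt = xor_blocks([stripes[0], stripes[2], parity])
print(rebuilt == stripes[1])  # True
```

Because XOR is its own inverse, recombining the surviving stripes with the parity stripe reproduces the missing data exactly, which is why RAID 5 tolerates one drive failure.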
We've crossed a threshold. AI used to be about research papers and new models posting ever-higher benchmark scores, but it is now driving a new gold rush of innovation. AI agents that solve real-world problems are the new opportunity for solo entrepreneurs to revolutionize industries. The systems being built today power real applications and services that are entering the hands of users and changing business operations around the world.

This transformation is no longer limited to large companies with deep pockets. A solo developer with a vision and the right tools can now create an AI-driven app and bring it to market. The barriers to entry have been lowered and the door to innovation is wide open.

"This year can truly be considered the inaugural year of artificial intelligence applications," said Alex Yeh, Founder and CEO of GMI Cloud, following his visit to NVIDIA's GTC 2025 in San Jose. What once felt like long-term speculation is now unfolding rapidly, as real use cases are served by a surge of AI-native products from solo developers and startups.

At the heart of this momentum is the rise of AI agents: software systems that can perceive, reason, plan, and take autonomous action. They're powering everything from intelligent customer support tools to domain-specific solutions like personalized fashion search engines that not only identify styles but also suggest looks and purchasing options in real time.

AI agents are distinct from traditional software in that they possess a level of autonomous decision-making that allows them to learn from interactions and adapt in real time. This makes them more dynamic, responsive, and capable of handling complex tasks with minimal human oversight, paving the way for smarter, more personalized user experiences.

Fueling this shift is a convergence of trends: powerful open-source LLMs like DeepSeek and LLaMA4, a growing emphasis on inference, and a robust ecosystem of modular, composable AI tools.
Together, these advances allow small teams, or even individuals, to build sophisticated AI agents at unprecedented speed.

But this accessibility depends on infrastructure that can keep pace. High-performance GPUs, flexible environments, and tightly integrated tools are necessary for developing good AI solutions. Building from scratch is expensive and risky, especially at the current pace of development. That's why platforms that provide a fully integrated AI development stack are becoming essential accelerators, enabling innovators to focus on their ideas without worrying about the infrastructure.

Companies like GMI Cloud have emerged as key enablers in this landscape. With four data centers in Taiwan and the U.S., access to over a thousand NVIDIA H100 and H200 GPUs, and a nearly 50-member technical team, GMI Cloud has built an AI application development platform that streamlines the entire lifecycle, from training and fine-tuning to inference and deployment.

By integrating computing resources with popular open-source tools, GMI Cloud gives developers and enterprises a unified environment that dramatically shortens the path from prototype to product. Users can deploy AI applications using a simple API interface and scale resources in real time through flexible subscription or pay-as-you-go pricing models.

This flexibility extends to deployment environments as well: cloud, on-prem, or hybrid, depending on client needs. That makes it easier for businesses to maintain data security while still taking advantage of GPU acceleration.

The Era of Solo Entrepreneurs Is Here—Industrial Sectors to Lead in AI Robot Adoption

"In the age of AI Agents, we're on the verge of seeing explosive growth in solo entrepreneurship," said Alex Yeh. AI startups need accessible infrastructure and resources to fuel good AI development. In the past, accessing the infrastructure and resources needed for AI development was often a costly and complicated process.
Developers had to invest heavily in high-performance hardware, navigate complex software environments, and deal with long deployment cycles. Now, with neoclouds like GMI Cloud, users can simply create an account, pay, and book a time slot to access training resources. Pricing is available via subscription or pay-as-you-go models, giving users the flexibility to scale computing resources in real time according to demand.

As AI agents continue to evolve, solo developers are empowered to create intelligent, scalable products that can disrupt industries. Take, for example, a solo developer who used open-source LLMs to create an AI-powered personal finance assistant. With minimal initial investment, this product is now helping thousands of users optimize their financial decisions. These are the kinds of innovations that AI agents unlock, enabling anyone to build impactful solutions.

This year's Computex will revolve around the theme "AI Next," highlighting three major areas: "Smart Computing & Robotics," "Next-Gen Technologies," and "Future Mobility." Alex Yeh believes the logical next step in this AI Agent era is the deployment of intelligent robots across real-world environments, with industrial applications being the most promising. GMI Cloud will showcase its powerful AI capabilities at Computex, demonstrating how its unique business model addresses the global shortage of GPUs for AI development. At the same time, the company continues to fulfill its mission: "Build AI Without Limits."

Alex Yeh points out that 2025 marks the beginning of the AI application era. With its powerful GPU infrastructure, GMI Cloud aims to empower the rise of solo entrepreneurs in the age of AI Agents.
Photo: DIGITIMES
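The perceive-reason-plan-act cycle that defines AI agents in the article can be illustrated with a toy Python loop. Every class, method, and string below is hypothetical, invented for illustration; it is not any specific agent framework or GMI Cloud API:

```python
class SupportAgent:
    """Toy agent: perceives an event, reasons about it, plans, then acts."""

    def perceive(self, event):
        # Normalize raw input into an observation.
        return {"text": event.lower()}

    def reason(self, observation):
        # Trivial "reasoning": classify the user's intent.
        return "refund" if "refund" in observation["text"] else "faq"

    def plan(self, intent):
        # Map intent to an ordered list of actions.
        if intent == "refund":
            return ["lookup_order", "issue_refund"]
        return ["answer_from_kb"]

    def act(self, steps):
        # Execute the plan (here, just report what would run).
        return "executed: " + ", ".join(steps)

agent = SupportAgent()
result = agent.act(agent.plan(agent.reason(agent.perceive("I want a REFUND"))))
print(result)  # executed: lookup_order, issue_refund
```

Real agents replace the `reason` step with an LLM call and the `act` step with tool invocations, but the control loop has this same shape.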
With 2025 Computex Taipei focusing on the three major themes of "AI & Robotics", "Next-Gen Tech" and "Future Mobility", global technology giants have gathered to display their AI technology prowess, focusing on the core concept of "AI Next". The rapid deployment of AI applications has also accelerated urgent demand for high-efficiency storage technologies across various application scenarios. As the global leader in NAND Flash controllers, Silicon Motion plays a key role in AI ecosystem development.

Meeting Diverse Storage Requirements, from Low Latency and Power Efficiency to High Data Throughput, to Support Edge AI Growth

"The emergence of DeepSeek has greatly lowered the threshold for AI applications," pointed out Mr. Kou, President and CEO of Silicon Motion. As an open-source technology, DeepSeek has been able to reduce the cost of language model training. It has gradually subverted the industry's traditional views on AI and led to the accelerating popularization of edge applications. He emphasizes that a wave of AI adoption has already begun for devices from smartphones and laptops to wearables, and that storage technologies are crucial in supporting this revolution.

In his analysis of AI storage architecture, Mr. Kou remarked that when implementing AI applications in various scenarios, the storage system requirements differ at each stage of the process, from initial data ingestion to the preparation, training, and inference stages. For instance, data ingestion requires importing a large amount of data, meaning that high write throughput is required. On the other hand, low-latency performance and support for a wide variety of I/O sizes have greater importance in the model training stage.
Although these requirements vary, the overall architecture must still possess five core characteristics in order to meet the needs of AI applications: high data throughput, low latency, low power, scalability, and high reliability.

In response to the massive data demands of AI applications, Silicon Motion leads innovation in storage technologies by upgrading NAND controller technology. Mr. Kou said that data application processes can be effectively optimized through hierarchical management and smart identification mechanisms. Flexible Data Placement (FDP) technology can also serve to improve efficiency and durability, while offering the advantages of low latency and low cost. For data security and reliability, the product also adopts advanced encryption standards and a tamper-proof hardware design. In combination with end-to-end data path protection mechanisms and Silicon Motion's proprietary NANDXtend™ technology, this enhances data integrity and prolongs the SSD's lifespan. In addition, Silicon Motion supports 2Tb QLC NAND and 6/8-Plane NAND, combining smart power management controllers (PMC) with advanced process technology to effectively reduce energy consumption while improving storage density.

FDP can also be paired with Silicon Motion's unique PerformaShape technology, which utilizes a multi-stage architecture algorithm to help optimize SSD performance based on user-defined QoS sets. Together, FDP and PerformaShape not only help users effectively manage data and reduce latency, but also significantly improve overall performance by approximately 20-30%.
These technologies are specifically suited for AI data pipelines in multi-tenant environments, including key stages such as data ingestion, data preparation, model training, and inference.

Creating Comprehensive Solutions to Realize Customer AI Applications Across Cloud and Edge Computing

In response to data center and cloud storage needs, Silicon Motion has launched the world's first 128TB QLC PCIe Gen5 enterprise SSD reference design kit. Based on the MonTitan SSD development platform and equipped with an SM8366 controller, it supports the PCIe Gen5 x4, NVMe 2.0, and OCP 2.5 standards. With a sequential read speed of over 14 GB/s and random access performance of over 3.3 million IOPS, it boasts a performance improvement of over 25%. This design speeds up training of large language models (LLM) and graph neural networks (GNN) while also reducing AI GPU energy consumption, allowing it to meet high-speed data processing demands.

For edge storage solutions, Mr. Kou stated that the number of edge devices with AI capabilities will grow rapidly. He forecast: "The AI humanoid robot market will see explosive growth in the next 5 to 10 years." Systems at different levels have different storage requirements. For example, at the sensor level, data needs to be processed and filtered in real time to ensure accurate data sensing, while decision-making relies on multi-modal fusion reasoning, which entails more demanding storage performance and data integration capabilities. Meanwhile, at the execution level, various calibration parameters must be stored to enable the robot to act and think more similarly to humans.
In response, Silicon Motion has actively deployed NVMe SSD, UFS, eMMC, and BGA SSD storage solutions, and values greater cross-industry collaboration to build a shared ecosystem, in order to promote the further evolution of smart terminal storage technologies.

Additionally, Silicon Motion has launched a variety of high-efficiency, low-power controllers to meet the AI application needs of edge devices. The SM2508 PCIe Gen5 controller is designed for AI laptops and gaming consoles, featuring up to 50% lower power consumption compared to similar products. The SM2324 supports USB 4.0 high-speed portable storage devices up to 16TB in size. The SM2756 UFS 4.1 controller delivers 65% higher power efficiency compared to UFS 3.1, providing an excellent storage experience for AI smartphones. In response to the urgent need for high-speed, high-capacity storage in self-driving cars, Silicon Motion has also joined hands with global NAND manufacturers and module makers to jointly create storage solutions for smart automobiles.

"Storage technology undoubtedly acts as a core link in the AI ecosystem," emphasized Mr. Kou. Taiwan has a complete and highly integrated semiconductor and information and communications industry chain. It is not only capable of building AI servers, but also possesses great potential for promoting the development of AI applications. He believes that more practical AI edge computing devices and groundbreaking applications will be launched at a rapid pace in the future, and that storage solutions will face increasingly demanding requirements due to challenges in processing massive amounts of data. Silicon Motion will continue to use technological innovation as a driving force to actively support AI development.

Mr. Kou expressed that the fast-paced development of generative AI has lowered the barriers to adoption for related applications.
Silicon Motion aims to satisfy the market's needs through offering a diverse range of high-efficiency, low-power storage solutions.
Photo: Silicon Motion Technology
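As background on the Flexible Data Placement (FDP) concept discussed above: the host tags writes with placement hints so the controller can keep data with similar lifetimes together, which reduces write amplification when data is later invalidated. A simplified model in Python (illustrative only; hint names are invented, and this is not the NVMe FDP command set):

```python
from collections import defaultdict

def place_writes(writes):
    """Group logical writes by a host-supplied placement hint, modeling how
    FDP lets an SSD segregate data streams into separate reclaim units."""
    units = defaultdict(list)
    for lba, hint in writes:
        units[hint].append(lba)
    return dict(units)

# Hypothetical write stream: hot user data, cold archives, and log appends.
writes = [(0, "hot"), (1, "cold"), (2, "hot"), (3, "log")]
print(place_writes(writes))  # {'hot': [0, 2], 'cold': [1], 'log': [3]}
```

Because each group can be erased together when its data expires, the drive avoids copying still-valid cold data during garbage collection, which is the efficiency and endurance benefit the article attributes to FDP.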
Montage Technology is a global fabless semiconductor design company specializing in data processing and interconnect chip solutions. Founded in 2004, the company is headquartered in Shanghai, with operations and partnerships spanning key international markets. The company focuses on DRAM memory modules and data processing solutions for the cloud computing and data center markets, addressing the soaring demand for high-bandwidth and large-capacity memory driven by artificial intelligence (AI) and enterprise workloads. Its product portfolio includes memory interface chips, memory module controller ICs, PCIe Retimer chips, and more.

Driven by the increasing demand from generative AI, machine learning, big data analytics, and data center construction, one of Montage Technology's recent product development focuses is Compute Express Link® (CXL®) technology, which enables high-speed interconnect between CPUs and DRAM memory, addressing memory challenges by enhancing system performance, scalability, and cost-efficiency. Built on the PCIe physical layer, CXL introduces innovative capabilities such as DRAM memory expansion, sharing, pooling, and dynamic configuration, effectively eliminating traditional data processing bottlenecks in data-intensive systems and data center servers.

In this interview, Geof Findley, World Wide VP of Sales & Business Development at Montage Technology, discusses the recent advancements in memory technologies aimed at increasing data center performance. Montage delivers versatile memory solutions that unlock next-gen memory bandwidth and performance, specifically tailored for AI and data-intensive workloads.
These solutions are already in use by major global DRAM manufacturers and electronics OEMs/ODMs, underscoring strong partnerships with companies such as SK hynix, Samsung Electronics, and Micron Technology.
Credit: Montage Technology

Specialty memory buffers and MRDIMM modules

With more than 20 years of experience in memory products, the company maintains stable profits and a proven track record. According to Findley, three key product lines are driving growth for Montage, the first being specialty DRAM buffers. Montage started to develop its DRAM module buffer technologies very early. Given the buffer chip's critical role between the processor and DRAM memory, Montage collaborates with major CPU giants such as Intel and AMD, leading silicon IP vendors, as well as the world's top three memory manufacturers: Micron, Samsung, and SK hynix.

Currently, Montage attributes its strong growth to the rapidly increasing demand for DDR5 memory used in data centers. Shipments of its DDR5 Registering Clock Driver (RCD) chips have grown substantially. Its 4th-gen DDR5 RCD chips deliver data rates of up to 7200 MT/s, a 50% increase over the 1st-gen products. Alongside its RCD portfolio, the company also provides DDR5 Data Buffers (DB) and other essential DDR5 module supporting chips like SPD EEPROMs with Hubs, Temperature Sensors, and Power Management ICs.

Considering high-throughput, low-latency data processing use cases, MRDIMMs (Multiplexed Rank Dual In-Line Memory Modules) are particularly useful for handling larger datasets in workloads such as large-scale databases, virtualization, and real-time analytics. In January 2025, Montage successfully sampled its Gen2 Multiplexed Rank Registering Clock Driver (MRCD) and Multiplexed Rank Data Buffer (MDB) chipset to globally leading memory manufacturers in South Korea, Japan, and North America.
These IC solutions ensure interoperability and system-level performance for customers seeking to leverage the DDR5 MRDIMM Gen2 standard in high-throughput, data-intensive applications.

Montage's MRCD and MDB chips are fundamental to MRDIMM operation
Credit: Montage Technology

CXL adoption in data centers has moved from crawling to toddling

The second product focus for Montage is CXL memory. CXL memory expansion is particularly valuable in scenarios where high DRAM capacity is required with only one DIMM per channel, or when servers lack available DIMM slots. Montage delivers CXL Memory eXpander Controller (MXC) chips supporting the CXL 1.0, 2.0, and 3.1 specifications. These MXC chips comply with JEDEC specifications for both DDR4 and DDR5 memory. The mass-produced MXC Gen1 chips support CXL 2.0 with DDR4-3200/DDR5-5600, while the MXC Gen2 chips support CXL 2.0 and are compatible with DDR5-6400 memory.

The development of CXL controllers is closely tied to PCIe interface technology. Current CXL 1.0 and 2.0 specifications primarily align with PCIe 5.0, while future CXL 3.x specifications are expected to align with PCIe 6.x to support even higher-speed memory channels. As memory pooling becomes more ambitious and widely adopted, the industry anticipates a surge in CXL deployment and volume scaling starting in 2026.

The MXC product line is designed for use in Add-in Cards (AICs), backplanes, or EDSFF memory modules to enable significant scalability in both memory capacity and bandwidth. Montage's MXC controllers are currently deployed by the world's top 3 memory manufacturers in E3.S form factor CXL memory modules. In parallel, Montage has launched several new product development projects in collaboration with Taiwan OEM/ODM partners. One such project involves working with a Taiwanese memory module manufacturer to develop CXL expansion card solutions.
Another involves co-designing CXL memory AICs with major Taiwanese motherboard and server manufacturers, targeting OEM/ODM opportunities with global cloud data center providers.
Credit: Montage Technology

The second-largest PCIe 5.0 and PCIe 4.0 Retimer supplier

The third product line for Montage is its Retimer chips, originally designed to enhance connectivity performance between GPUs, AI accelerators, CPUs, and other components within server systems. Retimer chips regenerate high-speed digital signals to extend reach and improve signal integrity in high-speed data processing systems. Currently, a typical AI server, often equipped with 8 GPUs, requires 8 or even 16 PCIe 5.0 Retimer chips.

Montage started delivering its PCIe 5.0/CXL 2.0 Retimer chips in January 2023, putting massive effort into extensive interoperability testing with a variety of compute, storage, and networking components, such as CPUs, PCIe switches, SSDs, GPUs, and NICs. As a result, Montage is now the second-largest PCIe 5.0 and PCIe 4.0 Retimer supplier globally and dominates the market in China.

Montage's Retimer chips are integrated into a variety of systems, including AI accelerator baseboards, server motherboards, and riser cards. Montage is now providing customer samples of its 16-lane PCIe 6.x/CXL 3.x Retimer chips as part of its new product roadmap.

Activities at COMPUTEX Taipei 2025

Montage has a global team of over 700 employees. During the trade show season and COMPUTEX Taipei 2025, Montage and its Taiwan partners will showcase a series of silicon products in a hotel suite showroom at the Place Taipei Hotel in the Nangang district. Findley describes this initiative as a strategic marketing campaign focused on highlighting the company's latest products, including Retimer chips, and introducing the Montage brand to attract new customers from Taiwan's electronics supply chain and server manufacturing sector.
In addition, the company will host advanced product training sessions to capture new business opportunities.

AI applications are rapidly increasing in computational demands, doubling every few months, and now represent the primary driver of the PCIe roadmap, making PCIe Gen 6 a key requirement for the next generation of data centers. Meanwhile, CXL technology is reshaping the industry with its innovative memory architecture. Montage looks forward to working with Taiwan-based electronics OEMs/ODMs, server manufacturers and ecosystem partners to unlock the transformative potential of CXL technology and drive future success.
As Computex 2025 draws global attention to Taipei, IEI Integration Corp., in collaboration with QNAP Systems, Inc., will host its annual technology showcase, IEI Insight Days, from May 21 to 23 at POPOP Taipei. Designed for industry professionals, this three-day event focuses on actionable solutions in industrial AI, resilient network infrastructure, and smart healthcare applications—bringing together real-world use cases, expert insights, and emerging technologies that address the evolving needs of edge computing and system integration.

Hosted alongside Computex, this exclusive event by IEI and QNAP invites industry professionals to explore edge innovations in a relaxed and focused environment at POPOP Taipei.

A Dedicated Experience Beyond the Show Floor

Located just one MRT stop from the Computex exhibition hall, POPOP Taipei offers a refreshing alternative to traditional trade show venues. Combining historical charm with modern design, the event space provides an ideal setting for relaxed yet focused dialogue. Whether you're exploring partnership opportunities, seeking insights on deployment strategies, or simply taking a break from the busy show floor, IEI Insight Days offers a curated environment to connect, learn, and exchange ideas.

Event Highlights

🔹 Focus Areas: Edge AI / Network Integration / AI-powered Healthcare
🔹 Showcase Solutions:
• Redundancy-enabled and recovery-ready edge platform
• Enterprise-grade networking infrastructure
• AI healthcare computing with real-time image processing and voice command capabilities
🔹 Networking Space: Open demo zones and seating areas designed for spontaneous technical conversations and business engagement

IEI Insight Days is more than a product showcase — it's a hub for collaboration and conversation. We warmly welcome Computex attendees, industry partners, and decision-makers to stop by and engage with us. Dive into the latest trends in edge computing, network infrastructure, and medical AI.
In an era demanding ever-smaller, more efficient, and higher-performing light sources, particularly in the short-wave infrared (SWIR) spectrum crucial for telecom and advanced sensing, a quiet revolution is underway in integrated photonics. CoreOptics is at the forefront, pioneering a new generation of surface-emitting lasers at 1310nm that promise to overcome the limitations of traditional edge-emitting counterparts.

For years, integrating lasers onto photonic integrated circuits (PICs) has been a complex dance of alignment and interconnection. Now, CoreOptics is changing the steps with its innovative SWIR high-power continuous-wave (CW) surface-emitting laser. The secret? Its inherent surface emission capability, tailor-made for direct flip-chip integration. This seemingly simple shift unlocks a cascade of benefits, streamlining manufacturing and paving the way for unprecedented integration efficiency.

Beyond the Edge: The Power of Surface Emission

The limitations of conventional edge-emitting lasers in integrated systems are becoming increasingly apparent. CoreOptics is embracing a different paradigm: Horizontal Cavity Surface Emitting Lasers (HCSELs). Imagine an edge-emitting laser cleverly engineered with a turning mirror, redirecting its light output perpendicular to the semiconductor wafer's surface. This ingenious design allows for packaging reminiscent of familiar Vertical Cavity Surface Emitting Lasers (VCSELs) or LEDs, but with distinct advantages.

HCSELs offer a compelling alternative, most notably enabling wafer-scale testing. This fundamental shift in manufacturing allows for rigorous quality control at an early stage, translating to lower production costs and significantly higher throughput. But the benefits don't stop there.
Surface emission inherently delivers:
• Superior Grating Coupling: Resulting in significantly better grating coupling efficiency into the PIC.
• Ultra-Compact Footprint: A crucial factor in the relentless drive towards miniaturization of integrated systems, allowing for denser and more powerful PICs.
• Untapped Potential: Opening doors to the creation of higher power and more complex integrated light sources, pushing the boundaries of what's possible on a chip.

CoreOptics' Innovation: SWIR Powerhouse

At the heart of CoreOptics' advancement lies its novel 1310nm (FP/DFB) high-power CW surface-emitting laser design. Leveraging advanced InP-based alloys and a meticulously optimized layered epitaxial structure, this laser is engineered for peak performance in the SWIR region. Key to its integration prowess are its innovative design features, including:
• AR/HR Wafer-Level Coating: Enhancing optical performance and simplifying the manufacturing process.
• Etched Facet (Mirror) and 45-Degree Reflector: Precisely engineered for efficient light extraction and surface emission.
These features streamline production and contribute directly to the laser's superior performance and seamless integration capabilities.

Flip the Chip, Transform the Integration

The true magic unfolds with direct flip-chip integration onto PICs. This technique involves attaching the laser die directly onto the PIC substrate, "face down."
It's a seemingly small change with profound implications. By embracing flip-chip bonding, CoreOptics is effectively bypassing the complexities of traditional wire bonding and intricate packaging steps, leading to a new era of integration efficiency characterized by:
• Lightning-Fast Electrical Paths: Shorter pathways translate directly to higher speed operation and pristine signal integrity, critical for high-bandwidth applications.
• Unprecedented Compactness: The elimination of bulky interconnects results in a significantly smaller overall footprint for the integrated laser-PIC module, paving the way for truly miniaturized devices.
• Pinpoint Optical Alignment: The direct and meticulously controlled bonding process ensures precise and stable optical grating coupling between the laser and the PIC waveguide, maximizing efficiency and reliability.

SWIR HCSELs: SWIR Horizontal Cavity Surface Emitting Lasers. Credit: CoreOptics

Simplifying the package: Less is more

The elegance of flip-chip integration extends to packaging. By directly bonding the laser, CoreOptics is drastically reducing the need for intermediate components and connections for both electrical and optical interfacing.
This simplification translates to:
• Fewer Interconnects, Higher Reliability: Eliminating potential points of failure leads to more robust and dependable integrated devices.
• Streamlined Assembly: The direct bonding process simplifies the manufacturing flow, potentially lowering assembly costs and boosting overall reliability.
• Smaller and Lighter Packages: A crucial advantage for applications where size and weight are at a premium, such as portable sensing devices and wearable technology.

A Spectrum of Possibilities: Applications Unleashed

The unique advantages of CoreOptics' integrated SWIR HCSEL laser solution are poised to revolutionize a wide array of applications, including:
• High-Speed Optical Communication: Enabling next-generation on-chip interconnects and ultra-compact silicon photonics transceivers.
• Advanced Sensing: Powering integrated LiDAR systems for autonomous vehicles and robotics, as well as compact gas sensing platforms.
• The Smart Revolution: Enabling under-display proximity sensing in smartphones and other consumer electronics.
• Miniature Analytical Instruments: Facilitating the development of compact spectroscopy devices for on-the-go analysis.
• Next-Gen Medical Diagnostics: Enabling the creation of sophisticated lab-on-a-chip devices for SWIR imaging and advanced biosensing.

Expanding the Horizon: 1460nm and Beyond

CoreOptics isn't stopping at 1310nm. Its development of surface-emitting SWIR lasers at 1460nm further expands its reach. This wavelength offers its own set of compelling advantages, including low water absorption for long-distance fiber optic communication and high modulation capability for advanced data transmission and sensing.
Potential applications span telecommunications, cutting-edge medical sensing and imaging (think non-invasive skin and glucose sensors), and precision industrial and environmental monitoring.

Illuminating New Paths: Resolight RCLEDs

Complementing its laser portfolio, CoreOptics also offers Resolight Resonant-Cavity Light-Emitting Diodes (RCLEDs) across a broad spectrum from 400nm to 1550nm. These aren't your average LEDs. Their resonant cavity structure dramatically enhances optical confinement, resulting in:
• Highly Directional Emission: Focusing light along a specific axis, unlike the omnidirectional output of standard LEDs.
• Small Divergence Angle: Enabling excellent beam collimation, minimizing light spread.
• Exceptional Color Purity: The resonant cavity acts as a filter, suppressing unwanted spectral leakage for a purer monochromatic output.
• Narrow Spectral Width: Achieving full-width at half maximum (FWHM) values between 10 and 20nm, significantly narrower than conventional LEDs.
• Robust Operation: Withstanding a wide temperature range of -40 to 85 degrees Celsius.

These enhanced characteristics make Resolight RCLEDs ideal for applications ranging from short-reach optical communication and high-resolution displays to the burgeoning field of visible light communication (VLC), precise optical sensing and scanning, and advanced medical and cosmetic devices.

Tailored Solutions: Design Flexibility at its Core

Recognizing the diverse needs of its clientele, CoreOptics leverages its unique HCSEL platform to offer customized solutions.
By precisely engineering the reflector angle based on the specific grating angles of the PIC, CoreOptics can optimize performance for individual customer requirements.

CoreOptics' featured product lines, encompassing a wide range of Horizontal Cavity Surface Emitting Lasers (HCSELs) and high-performance Resonant Cavity LEDs (ResoLight), position the company as a key innovator in the rapidly evolving landscape of integrated photonics. As demand for compact, efficient, and high-performance light sources continues to surge, CoreOptics' pioneering work in surface-emitting lasers and their seamless integration onto PICs is poised to revolutionize industries and unlock a new era of photonic innovation.
MSI, a global leader in gaming and high-performance computing, announces the launch of its comprehensive AIoT solution lineup at COMPUTEX 2025, featuring AI servers, AI supercomputers, smart energy, industrial PCs, and intelligent transportation systems. The showcase extends beyond core AI computing to system integration, hardware innovation, and cross-domain applications, demonstrating MSI's leadership as a provider of AI-driven and high-performance technologies for next-generation industries.

"AI is no longer just a trend—it is a critical engine driving industrial intelligence," said Sam Chern, MSI Vice President of Marketing. "With deep hardware R&D capabilities and a strong focus on high-performance and AI-integrated systems, MSI is expanding its reach from data centers to the edge, from energy management to smart manufacturing. Through this showcase, we hope to present our vision for a smarter, cross-domain technology future."

Product Highlights

AI Servers × Cloud Infrastructure

Full-Rack Integration
MSI presents turnkey rack-level infrastructure, including 19" EIA, 21" OCP ORv3, and NVIDIA MGX AI racks.
• The EIA rack targets dense compute environments such as private clouds and virtualization.
• The OCP ORv3 rack features an open 21" chassis with 48V power delivery and OpenBMC support for hyperscale data centers.
• The AI rack, aligned with NVIDIA's reference architecture, supports MGX modular systems and NVIDIA Spectrum™-X networking, enabling multi-node GPU scaling for AI training and inference.
All racks are deployment-ready and thermally optimized.

Core and Open Compute Servers
MSI expands its DC-MHS modular server lineup with the AMD-based CD270-S4051-X4/X2 and CD281-S4051-X2 (compliant with the ORv3 standard) and the Intel-based CD270-S3061-X4/X2 and CD270-S3071-X2.
All systems feature 2U4N or 2U2N form factors, PCIe Gen5, DDR5, and support for up to 12 NVMe drives and 16 DIMM slots per node, designed for modular, high-density, and I/O-optimized cloud workloads.

Credit: MSI

AI Platforms & DGX Station
MSI introduces MGX GPU servers for AI workloads:
• The CG480-S5063 (Intel) and CG480-S6053 (AMD) support 8 FHFL dual-width PCIe 5.0 GPUs, 32 DDR5 DIMMs, and up to 20 E1.S NVMe bays (Intel).
• The CG290-S3063 (2U) supports 4 FHFL dual-width GPUs and 16 DDR5 DIMM slots in a compact design, ideal for edge inferencing and lightweight AI training.
• The CT60-S8060 DGX Station, based on the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip, delivers up to 20 PFLOPS of AI performance, 784GB of unified memory, and up to 800Gb/s networking—tailored for on-prem training, distributed inference, and collaborative R&D.

Enterprise Platforms
MSI delivers complete solutions across DC-MHS and standard architectures:
• DC-MHS platforms include CX270, CX171, and CX271 series servers and HPMs such as the D3066 and D4056.
• General-purpose systems include the S2206 (dual-socket AMD EPYC) and CS280-S3065 (Intel Xeon 6, 2U 24-bay storage).
• Standard motherboards span ATX and uATX form factors, supporting Intel Xeon 6 and AMD EPYC 9005/8004/4005 processors, designed for scalable cloud, virtualization, and storage deployments.

Credit: MSI

Industrial PCs × Rugged Tablets
MSI debuts the EdgeXpert MS-C931, a desktop AI supercomputer built on the NVIDIA DGX™ Spark platform. Powered by the NVIDIA GB10 Grace Blackwell Superchip, it delivers 1000 AI TOPS of FP4 performance, 128GB of unified memory, and ConnectX-7 networking, ideal for developers in the education, finance, and healthcare sectors.

Also introduced are three software tools:
• MSI SysLink: Remote device management
• MSI ScreenLink: EDID lock for digital signage
• MSI AI SmartLink: Seamless integration with large language models
These tools complement MSI's full line of box PCs and embedded boards, designed for next-generation AI and industrial automation.

MSI
debuts two new rugged tablet models—the NB41 and NE21—designed to meet the demanding requirements of field workers and industrial environments.
• The NB41 features an 8-inch display and is equipped with an Intel Alder Lake-N processor. It is MIL-STD-810G validated, drop-tested to 1.5 meters, and rated IP65 for dust and water resistance. It supports Wi-Fi 6E and Bluetooth 5.3, and offers a 7-hour battery life with hot-swap capability, ensuring continuous operation in mission-critical scenarios.
• The NE21 is an 11.6-inch model powered by a 13th Gen Intel Raptor Lake Core i processor. It features an 800-nit sunlight-readable LCD, IP65 water/dust protection, MIL-STD-810G test criteria, and 4G LTE support. With a 7.5-hour battery, it is ideal for field deployments requiring extended mobile connectivity.
Both tablets support a range of accessories, including shoulder straps, hand grips, styluses, and tablet stands, providing ergonomic flexibility and operational durability for warehouse, logistics, manufacturing, and utility applications.

EV Charging × Smart Energy

MSI EZgo Portable EV Charger
The MSI EZgo supports up to 11kW output and features a lightweight, portable design and an IP66 waterproof/dustproof housing, ideal for convenient charging during travel and at home. Replaceable adapters support international standards (US/EU/TW/JP/KR/AU/Industrial), and the unit is UL2263 and UL817 certified. A built-in 1.8" display and Bluetooth app enhance user control. It withstands up to 2 tons of enclosure weight and 20 tons of cable crush force, and includes warranty and insurance support.

MSI Hyper 80 Dual
Designed for commercial locations with 1–2 hour parking durations, such as shopping malls, restaurants, and cinemas, this ultra-slim 80kW dual-gun DC fast charger measures just 30cm thick. With ISO 15118 and DIN 70121 support, Plug & Charge, and dynamic power distribution, it is optimized for urban and commercial charging environments.
Its overhead cable management system reduces wear and extends service life.

MSI EV/Eco Series & eConnect System (EMS)
These residential/commercial units feature solar-ready integration, 13kW (single-phase) or 22kW (three-phase) output, and AI license plate recognition. The eConnect system (EMS), a recipient of the iF Design Award, offers real-time visibility, remote control, dynamic load balancing, and multi-user billing via a user-friendly interface.

Autonomous Mobile Robots (AMRs) × Smart Manufacturing
MSI will highlight its latest AI-powered Autonomous Mobile Robots (AMRs) built for smart manufacturing and warehouse automation. Built on NVIDIA Jetson Orin Nano system-on-modules, the NVIDIA Isaac robotics platform, and NVIDIA Omniverse, MSI AMRs enable real-time navigation, intelligent fleet coordination, and digital twin simulation. MSI will showcase two flagship AMR models, each tailored for different smart factory needs:
• AMR-AI-Cobot Pro: A robotic arm with AI vision for precise material handling and assembly
• AMR-AI-Base Robot: An intelligent delivery and sorting solution for automated warehouse operations
Both models integrate advanced features such as LiDAR-based SLAM, AI-driven motion planning, and energy-efficient battery management, helping factories improve efficiency, reduce costs, and scale operations.

MSI unveils its latest AMRs powered by the NVIDIA Jetson Orin Nano, Isaac robotics platform, and NVIDIA Omniverse.

Smart Transportation – Fleet Management Solutions
MSI is expanding into smart transportation with a diverse range of fleet management products. The product line includes fleet management tablets, telematics boxes with license plate and object recognition functions, and smart rearview mirrors.
Powered by edge computing and AI-driven image analysis, MSI helps logistics and commercial fleets enhance operational efficiency and safety on the road.

Smart Office – AI Video Conference System
MSI's new Smart Office solution features an all-in-one AI-powered video conference bar with 4K video, auto-tracking, voice enhancement, ANC, and AGC for clear communication. Paired with a 360-degree conference camera and control panel, it enables seamless collaboration across meeting room setups and remote teams.

Exhibition Info
Location: Booth J0506, Hall 1, Taipei Nangang Exhibition Center
📅 Dates: May 20–23, 2025
MSI AIoT: https://www.msi.com/to/aiot
MSI AIoT Facebook: https://www.facebook.com/MSIAIoT
MSI AIoT LinkedIn: https://www.linkedin.com/showcase/msi-aiot
MSI Global YouTube: https://www.youtube.com/user/MSI
Subscribe to MSI RSS feeds via https://www.msi.com/rss for real-time news and more product information.

About MSI AIoT
MSI integrates cloud-oriented server solutions, meets customer needs in industrial computing, introduces robots for AI-enabled living, and delivers automotive electronics built on human-centric technology to provide optimal AIoT solutions. It is also a leading brand in the AI, business, and IoT markets. For more product information, please visit https://www.msi.com/to/aiot
DEEPX ensures unmatched AI reliability with lower power, lower heat, and a total cost of ownership lower than even "free" chips.

For Lokwon Kim, founder and CEO of DEEPX, this isn't just an ambition—it's a foundational requirement for the AI era. A veteran chip engineer who once led advanced silicon development at Apple, Broadcom, and Cisco, Kim sees the coming decade as a defining moment to push the boundaries of technology and shape the future of AI. While others play pricing games, Kim is focused on building what the next era demands: AI systems that are truly reliable.

"This white paper," Kim says, holding up a recently published technology report, "isn't about bragging rights. It's about proving that what we're building actually solves the real-world challenges faced by factories, cities, and robots—right now."

Credit: DEEPX

A new class of reliability for AI systems

While GPGPUs continue to dominate cloud-based AI training, Kim argues that the true era of AI begins not in server racks, but in the everyday devices people actually use. From smart cameras and robots to kiosks and industrial sensors, AI must be embedded where life happens—close to the user, and always on. And because these devices operate in power-constrained, fanless, and sometimes battery-driven environments, low power isn't a preference—it's a hard requirement. Cloud-bound GPUs are too big, too hot, and too power-hungry to meet this reality. On-device AI demands silicon that is lean, efficient, and reliable enough to run continuously—without overheating, without delay, and without failure.

"You can't afford to lose a single frame in a smart camera, miss a barcode in a warehouse, or stall a robot on an assembly line," Kim explains. "These moments define success or failure."

GPGPU-based systems and many competing NPUs fail this test.
With high power draw, significant heat generation, the need for active cooling, and cloud latency issues, they are fundamentally ill-suited for the always-on, low-power edge. In contrast, DEEPX's DX-M1 runs under 3W, stays below 80°C with no fan, and delivers GPU-class inference accuracy with no dependency on cloud latency. Under identical test conditions, the DX-M1 outperformed competing NPUs by up to 84% while running 38.9°C cooler, with a die 4.3× smaller.

This is made possible by rejecting the brute-force SRAM-heavy approach and instead using a lean on-chip SRAM + LPDDR5 DRAM architecture that enables:
• Higher manufacturing yield
• Lower field failure rates
• Elimination of PCIe bottlenecks
• 100+ FPS inference even on small embedded boards

DEEPX also developed its own quantization pipeline, IQ8™, which keeps accuracy loss under 1% across 170+ models. "We've proven you can dramatically reduce power and memory without sacrificing output quality," Kim says.

Credit: DEEPX

Real customers. Real deployments. Real impact.

Kim uses a powerful metaphor to describe the company's strategic position. "If cloud AI is a deep ocean ruled by GPGPU-powered ships, then on-device AI is the shallow sea—close to land, full of opportunities, and hard to navigate with heavy hardware." GPGPU vendors, he argues, are structurally unsuited to play in this space. Their business model and product architecture are simply too heavy to pivot to low-power, high-flexibility edge scenarios. "They're like battleships," Kim says. "We're speedboats—faster, more agile, and able to handle 50 design changes while they do one."

DEEPX isn't building in a vacuum. The DX-M1 is already being validated by major companies such as Hyundai Robotics Lab, POSCO DX, and LG Uplus, which rejected GPGPU-based designs due to energy, cost, and cooling concerns.
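The internals of the IQ8™ pipeline are not disclosed in this article, but the general family of techniques it belongs to, post-training 8-bit quantization, is straightforward to sketch. The following is a generic, illustrative example of symmetric per-tensor INT8 quantization, not DEEPX's actual pipeline; the tensor and all names are invented for the demonstration.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor INT8 quantization: map [-max|w|, +max|w|] onto [-127, 127]."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float tensor from the INT8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=10_000).astype(np.float32)   # stand-in for one weight tensor
q, s = quantize_int8(w)

# Worst-case rounding error is 0.5 * scale, i.e. about 0.39% of the largest weight,
# which is one reason sub-1% accuracy loss is achievable with 8-bit weights.
rel_err = np.abs(dequantize(q, s) - w).max() / np.abs(w).max()
print(f"relative reconstruction error: {rel_err:.4%}")
```

Production pipelines add per-channel scales, calibration data, and accuracy-aware fallbacks on top of this basic idea.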
These companies found that even "free" chips resulted in a higher total cost of ownership (TCO) than the DX-M1, once you add electricity bills, cooling systems, and field failure risks. According to Kim, "Some of our collaborations realized that switching to DX-M1 saves up to 94% in power and cooling costs over five years. And that savings scales exponentially when you deploy millions of devices."

Building on this momentum, DEEPX is now entering full-scale mass production of the DX-M1, its first-generation NPU built on a cutting-edge 5nm process. Unlike many competitors still relying on 10–20nm nodes, DEEPX has achieved an industry-leading 90% yield at 5nm, setting the stage for dominant performance, efficiency, and scalability in the edge AI market.

Looking beyond current deployments, DEEPX is now developing its next-generation chip, the DX-M2—a groundbreaking on-device AI processor designed to run LLMs under 5W. As large language model technology evolves, the field is beginning to split in two directions: one track continues to scale up LLMs in cloud data centers in pursuit of AGI; the other, more practical path focuses on lightweight, efficient models optimized for local inference, such as DeepSeek and Meta's LLaMA 4. DEEPX's DX-M2 is purpose-built for this second future. With ultra-low power consumption, high performance, and a silicon architecture tailored for real-world deployment, the DX-M2 will support LLMs like DeepSeek and LLaMA 4 directly at the edge, with no cloud dependency required. Most notably, the DX-M2 is being developed to become the first AI inference chip built on the leading-edge 2nm process, marking a new era of performance-per-watt leadership.
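The "free chips cost more" argument can be sanity-checked with back-of-envelope arithmetic. Every number below (electricity price, cooling overhead, power draws, chip prices) is a hypothetical illustration chosen for the sketch, not a DEEPX, vendor, or customer figure; the point is only that for an always-on device, energy and cooling can dwarf the purchase price.

```python
# Hypothetical 5-year TCO for one always-on edge node.
# All constants are assumed illustrations, not vendor figures.
HOURS_5Y = 5 * 365 * 24          # 43,800 hours of continuous operation
KWH_PRICE = 0.15                 # assumed electricity price, USD per kWh
COOLING_OVERHEAD = 0.4           # assumed extra cooling energy per compute watt

def five_year_tco(chip_price_usd: float, watts: float) -> float:
    """Purchase price plus 5 years of compute + cooling electricity."""
    energy_kwh = watts * (1 + COOLING_OVERHEAD) * HOURS_5Y / 1000
    return chip_price_usd + energy_kwh * KWH_PRICE

free_hot_chip = five_year_tco(chip_price_usd=0.0, watts=75.0)    # "free" but power-hungry
paid_low_power = five_year_tco(chip_price_usd=50.0, watts=3.0)   # paid but runs under 3 W

# Under these assumptions the "free" chip is roughly 9x more expensive over 5 years.
print(f'"free" 75 W chip: ${free_hot_chip:.0f} vs. 3 W NPU: ${paid_low_power:.0f}')
```

The exact ratio depends entirely on the assumed rates, but the structure of the calculation is why TCO comparisons favor low-power silicon at fleet scale.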
In short, the DX-M2 isn't just about running LLMs efficiently—it's about unlocking the next stage of intelligent devices, fully autonomous and truly local.

Credit: DEEPX

If ARM defined the mobile era, DEEPX will define the AI era

Looking ahead, Kim positions DEEPX not as a challenger to cloud chip giants, but as the foundational platform for the AI edge—just as ARM once was for mobile. "We're not chasing the cloud," he says. "We're building the stack that powers AI where it actually interacts with the real world—at the edge."

In the 1990s, ARM changed the trajectory of computing by creating power-efficient, always-on architectures for mobile devices. That shift didn't just enable smartphones—it redefined how and where computing happens. "History repeats itself," Kim says. "Just as ARM silently powered the mobile revolution, DEEPX is quietly laying the groundwork for the AI revolution—starting from the edge."

His 10-year vision is bold: to make DEEPX the "next ARM" of AI systems, enabling AI to live in the real world—not the cloud. From autonomous robots and smart city kiosks to factory lines and security systems, DEEPX aims to become the default infrastructure wherever AI must run reliably, locally, and on minimal power.

Everyone keeps asking about the IPO. Here's what Kim really thinks.

With DEEPX gaining attention as South Korea's most promising AI semiconductor company, one question keeps coming up: when's the IPO? But for founder and CEO Lokwon Kim, the answer is clear—and measured. "Going public isn't the objective itself—it's a strategic step we'll take when it aligns with our vision for sustainable success," Kim says. "Our real focus is building proof—reliable products, real deployments, actual revenue. A unicorn company is one that earns its valuation through execution—especially in semiconductors, where expectations are unforgiving. The bar is high, and we intend to meet it."

That milestone, Kim asserts, is no longer far away.
In other words, DEEPX isn't rushing for headlines—it's building for history. DEEPX isn't just designing chips—it's designing trust.

In an AI-powered world where milliseconds can mean millions, reliability is everything. As AI moves from cloud to edge, from theory to infrastructure, the companies that will define the next decade aren't those chasing faster clocks, but those building systems that never fail. "We're not here to ride a trend," Kim concludes. "We're here to solve the hardest problems—the ones that actually matter."

Credit: DEEPX

When Reliability Matters Most—Industry Leaders Choose DEEPX

Visit DEEPX at Booth L0409, 4F, Taipei Nangang Exhibition Center, from May 20–23, to witness firsthand how we're setting new standards for reliable on-device AI. For more information, you can follow DEEPX on social media or visit the official website.