Wednesday 14 May 2025
VIA Labs presents USB4 gaming dock reference design compliant with the latest EU EUP/ERP regulations
VIA Labs, Inc. (VLI), a leading provider of USB4, SuperSpeed USB, USB PD Controllers, and Display Controllers, has unveiled a USB4 gaming dock reference design that meets the latest EU EUP/ERP energy efficiency requirements. The design features the VL832, a USB-IF certified USB4 Device Controller; the VL109, a USB PD 3.2 certified Power Delivery Controller; and the VL605, a DisplayPort to HDMI 2.1 Protocol Converter. VIA Labs will showcase its latest product solutions at Computex 2025 from May 20 to 23 at the Nangang Exhibition Center, Hall 1, USB-IF Community – VIA Labs, Inc. (Booth N0313).

Since the launch of Nintendo's Switch in 2017, which became a massive hit with its support for both handheld and TV modes, the market has seen the emergence of a variety of handheld gaming consoles. Valve's Steam Deck has attracted significant attention for its impressive ability to run PC games, and subsequent handhelds such as the ASUS ROG Ally, MSI Claw, Lenovo Legion Go, and Acer Nitro Blaze series have rapidly expanded the handheld gaming market. There is even speculation that Xbox and PlayStation will launch handheld gaming consoles in 2027. These handhelds are equipped with powerful, efficient graphics processors capable of running AAA titles on battery power. They also support VRR (Variable Refresh Rate) technology, which enhances the gaming experience by eliminating screen tearing, reducing input lag, and improving overall smoothness. Moreover, they offer USB-C connectivity, allowing users to switch from handheld mode to TV mode and play on a television or gaming monitor for a more immersive visual experience. Most of these devices also support USB4 or Intel Thunderbolt, enabling high-speed video, audio, and USB data transmission at up to 40Gbps.

Concurrently, within the wider regulatory environment for electronic goods, the EU's EUP (Energy-Using Products) and ERP (Energy-Related Products) regulations are evolving. These aim to reduce the environmental impact of energy-consuming products and improve energy efficiency. The updated regulation (EU) 2023/826 expands the scope of control and imposes stricter power consumption limits for electronic and electrical equipment in off, standby, and networked standby modes. From May 9, 2025, products must comply with the new requirements, with power consumption not exceeding 0.5 watts in standby or off mode. From 2027 onward, the limit will be further reduced to 0.3 watts.

In response to these gaming market trends and the upcoming 2027 EUP/ERP requirements, VIA Labs has introduced a USB4 gaming dock reference design that delivers new levels of functionality and performance while meeting these stringent energy efficiency standards. The design features the VL109, a 3-port USB PD 3.2 controller optimized for ultra-low standby power consumption. It supports the VL832 USB4 device controller, enabling USB4 40Gbps or legacy DisplayPort Alternate Mode, and also provides USB PD EPR charge-through, delivering up to 140W of power to the host. In standby mode—when the upstream port is not connected to a host—the design intelligently manages power to the other key components on the board, reducing total power consumption to only 10 milliwatts. This is significantly below the 300 milliwatt limit mandated by the 2027 EUP/ERP standby/off-mode requirements, leaving developers a very large margin for implementing additional features or functionality.
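To put those numbers in context, here is a minimal, illustrative Python sketch (not a VIA Labs tool) that checks a measured standby figure against the two regulatory thresholds cited above; the function and variable names are assumptions for illustration only.

    # Minimal sketch: compare a dock's measured standby power against the
    # standby/off-mode limits of Regulation (EU) 2023/826 cited in the article.
    LIMIT_2025_W = 0.5   # limit applying from May 9, 2025
    LIMIT_2027_W = 0.3   # tightened limit from 2027 onward

    def standby_margin(measured_w: float, limit_w: float) -> float:
        """Return the remaining headroom (in watts) below a regulatory limit."""
        return limit_w - measured_w

    measured_standby_w = 0.010  # 10 mW, the reference design figure quoted above

    for label, limit in (("2025", LIMIT_2025_W), ("2027", LIMIT_2027_W)):
        margin = standby_margin(measured_standby_w, limit)
        status = "PASS" if margin >= 0 else "FAIL"
        print(f"{label} limit {limit:.1f} W: measured "
              f"{measured_standby_w * 1000:.0f} mW -> {status}, "
              f"headroom {margin * 1000:.0f} mW")

With the 10 mW figure, the sketch reports roughly 490 mW of headroom against the 2025 limit and 290 mW against the 2027 limit.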
To provide flexible yet cost-effective display options for gamers, the reference design features an HDMI 2.1 port and a USB-C port supporting DisplayPort Alternate Mode. It avoids the cost of a DP MST hub (which would be required for simultaneous dual output) by activating video output on a first-come, first-served basis. Gamers can easily connect either HDMI displays or the growing range of USB-C portable monitors, and through the VL832's USB4 DP tunneling mode, either interface can deliver refresh rates of up to 4K@240Hz.

Another highlight of the reference design is the VL605, a fully VRR-capable HDMI 2.1 Protocol Converter. The VL605 converts DisplayPort signals from the VL832 into HDMI 2.1 output at up to 8K@60Hz or 4K@240Hz. It preserves VRR by translating DisplayPort Adaptive Sync for seamless operation over HDMI 2.1, ensuring VRR works across AMD, Intel, and NVIDIA G-SYNC Compatible platforms. Furthermore, the VL605 has passed AMD FreeSync validation testing, and its OUI (Organizationally Unique Identifier) has been incorporated into AMD's latest GPU drivers, enabling FreeSync Premium Pro functionality on compatible AMD platforms as well. Additionally, the VL605 supports LPCM and various compressed audio formats. It integrates a DSC decoder and complies with the latest DisplayPort 2.1 standard, supporting both Autonomous Mode and Source-controlled Mode, which give the signal source enhanced control capabilities. To safeguard against malicious firmware attacks, the VL605 also enables secure firmware updates via built-in ECDSA authentication.

Huineng Chang, Director of the Product Management Division at VIA Labs, emphasized the potential of the VL605 in the gaming market: "With increasingly powerful handheld game consoles entering the market, mobile chipsets from Apple, Qualcomm, and MediaTek now support hardware-based ray tracing. This marks a shift where gamers are no longer confined to traditional consoles like Xbox or PlayStation, but can enjoy AAA titles on handheld game consoles. On the other hand, many of today's TVs support high refresh rates and Game Mode, while portable monitors have become more popular and widely adopted. In response to these trends, VIA Labs has introduced this reference design alongside a range of USB Hub and USB PD controllers, enabling the VL605 to support USB-C multifunction docks with USB PD charging and compliance with the 2027 EUP/ERP energy regulations. With VRR support across platforms, the VL605 empowers handheld game consoles to transform into high-end home gaming consoles, delivering a gaming experience that rivals traditional console systems."

VIA Labs VL605: https://www.via-labs.com/product_show.php?id=123
VIA Labs VL832: https://www.via-labs.com/product_show.php?id=119

VIA Labs Exhibition Information
Computex Taipei 2025
Date: 2025/5/20-23
Location: Nangang Exhibition Hall, Hall 1, Taipei, Taiwan
Booth: VIA Labs, Inc. at USB-IF Community (N0313)

About VIA Labs, Inc.
VIA Labs, Inc. (VLI) is a leading supplier of USB4, SuperSpeed USB, USB PD Controllers, and Display Controllers.
A subsidiary of VIA Technologies, Inc., VLI leverages its expertise in analog circuit design, high-speed serial interfaces, and systems integration to create a rich product portfolio that includes USB Host, Hub, and Device controllers in addition to USB PD and charging controllers. VIA Labs has demonstrated technology and industry leadership through standards development and by bringing newly developed USB technologies to market. www.via-labs.com
Friday 9 May 2025
Cutting power and costs: DEEPX outperforms even free chips in TCO
In today's AI economy, reliability is not optional—it's essential. AI now runs factory lines, city cameras, and delivery robots, where even a one-second pause can trigger costly failures or safety risks. Any AI system that can't operate 24/7 without human intervention is simply not viable.

To succeed at the edge, AI must meet four strict demands: sub-100 ms latency, 99.999% uptime, a power budget under 20 W, and junction temperatures below 85 °C. Without these, systems overheat, slow down, or fail in the field.

Credit: DEEPX

Architected for reliability: DX-M1's thermal and performance breakthroughs

GPGPU-based AI systems fall short of these requirements. They consume over 40 W—far beyond what low-power infrastructure and mobile robot batteries can support. They also require fans, heat sinks, and vents, which add noise, cost, and new points of failure. Moreover, their dependence on remote servers introduces cloud latency and ongoing bandwidth expenses.

DEEPX overcomes these hurdles. The DX-M1 chip delivers GPGPU-class accuracy while consuming less than 3 W of power. In thermal testing with YOLOv7 at 33 FPS under identical conditions, DX-M1 maintained a stable 61.9 °C, while a leading competitor overheated to 113.5 °C—enough to trigger thermal throttling. Under maximum load, DX-M1 sustained 75.4 °C while achieving 59 FPS, whereas the competitor reached only 32 FPS at 114.3 °C. In other words, DX-M1 delivered 84% higher performance while running 38.9 °C cooler.

A key strength of the DX-M1's architecture is its balance of speed and stability. Unlike some DRAM-less NPUs that rely on bulky on-chip SRAM—often leading to overheating, slowdowns, and low manufacturing yield—DX-M1 combines compact SRAM with high-speed LPDDR5 DRAM positioned close to the chip. This results in smoother, cooler, and more reliable AI performance, even in compact, power-constrained environments. As a result, DX-M1 reduces hardware and energy costs by up to 90%, making it one of the most cost-effective AI chips available.

Credit: DEEPX

The true cost of AI hardware: More than the price tag

DEEPX recently supported two customers building AI systems for factory robots and on-site servers. At first, both companies planned to use 40 W GPGPUs. But during testing, they realized the hidden costs:

• Running a 40 W GPGPU nonstop for five years costs twice as much in electricity as buying one DX-M1 chip.
• The heat from GPGPUs requires fans and cooling systems, which consume extra power and increase maintenance needs.
• Even if the GPGPU hardware were free, the total cost of operation would still be more than double that of a DX-M1-based system.

When the companies tested multiple NPU vendors for power efficiency, heat, and accuracy, they found that DEEPX's DX-M1 was the best fit for their real-world use. Over five years, DX-M1 cuts electricity and cooling costs by about 94% compared to GPGPU-based systems—a saving that gives companies deploying AI at scale a major business advantage.

In short, the most cost-effective AI hardware is not the one with the lowest price tag, but the one that delivers high performance with low power, stable heat, and reliable results over time.
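As a rough, back-of-the-envelope illustration of the cost comparison above, the following Python sketch estimates the five-year, always-on electricity cost of a 40 W GPGPU versus a 3 W DX-M1. The power figures come from the article; the electricity price and cooling overhead factor are illustrative assumptions, not DEEPX data.

    # Back-of-the-envelope 5-year energy cost comparison (illustrative only).
    # The 40 W and 3 W power figures are quoted in the article; the electricity
    # price and cooling overhead below are assumed values, not DEEPX data.
    HOURS_PER_YEAR = 24 * 365
    YEARS = 5
    PRICE_PER_KWH = 0.15      # assumed average electricity price, USD/kWh
    COOLING_OVERHEAD = 0.4    # assumed extra energy spent on fans/cooling (GPGPU only)

    def five_year_energy_cost(power_w: float, cooling_overhead: float = 0.0) -> float:
        """Electricity cost in USD for running a device 24/7 for five years."""
        kwh = power_w / 1000 * HOURS_PER_YEAR * YEARS * (1 + cooling_overhead)
        return kwh * PRICE_PER_KWH

    gpgpu_cost = five_year_energy_cost(40.0, COOLING_OVERHEAD)
    dxm1_cost = five_year_energy_cost(3.0)

    print(f"GPGPU (40 W + cooling): ~${gpgpu_cost:,.0f} over {YEARS} years")
    print(f"DX-M1 (3 W, fanless):   ~${dxm1_cost:,.0f} over {YEARS} years")
    print(f"Estimated saving: {100 * (1 - dxm1_cost / gpgpu_cost):.0f}%")

With these assumed inputs the saving lands in the same ballpark as the roughly 94% figure cited above; the exact number depends on local electricity prices and the cooling design.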
The future of AI will be built not just on speed or model size—but on reliability. Without stable, predictable performance, AI cannot scale into the real world. In factories, cities, and autonomous machines, even a momentary delay can lead to failure, risk, or lost trust. That's why reliability isn't just important—it's foundational. DEEPX is leading this transformation by reducing risk, lowering long-term costs, and delivering AI that operates independently, safely, and without interruption.

Credit: DEEPX
Friday 9 May 2025
Is your AI system built to last—or bound to fail? Only DEEPX has the answer
DEEPX ensures unmatched AI reliability with lower power, lower heat, and a total cost of ownership lower than even "free" chips.

For Lokwon Kim, founder and CEO of DEEPX, this isn't just an ambition—it's a foundational requirement for the AI era. A veteran chip engineer who once led advanced silicon development at Apple, Broadcom, and Cisco, Kim sees the coming decade as a defining moment to push the boundaries of technology and shape the future of AI. While others play pricing games, Kim is focused on building what the next era demands: AI systems that are truly reliable.

"This white paper," Kim says, holding up a recently published technology report, "isn't about bragging rights. It's about proving that what we're building actually solves the real-world challenges faced by factories, cities, and robots—right now."

Credit: DEEPX

A new class of reliability for AI systems

While GPGPUs continue to dominate cloud-based AI training, Kim argues that the true era of AI begins not in server racks, but in the everyday devices people actually use. From smart cameras and robots to kiosks and industrial sensors, AI must be embedded where life happens—close to the user, and always on.

Because these devices operate in power-constrained, fanless, and sometimes battery-driven environments, low power isn't a preference—it's a hard requirement. Cloud-bound GPUs are too big, too hot, and too power-hungry to meet this reality. On-device AI demands silicon that is lean, efficient, and reliable enough to run continuously—without overheating, without delay, and without failure.

"You can't afford to lose a single frame in a smart camera, miss a barcode in a warehouse, or stall a robot on an assembly line," Kim explains. "These moments define success or failure."

GPGPU-based systems and many competing NPUs fail this test. With high power draw, significant heat generation, the need for active cooling, and cloud latency issues, they are fundamentally ill-suited for the always-on, low-power edge. In contrast, DEEPX's DX-M1 runs under 3 W, stays below 80 °C with no fan, and delivers GPU-class inference accuracy with no dependence on cloud latency.

Under identical test conditions, the DX-M1 outperformed competing NPUs by up to 84%, while running 38.9 °C cooler and using a die 4.3× smaller.

This is made possible by rejecting the brute-force SRAM-heavy approach and instead using a lean on-chip SRAM + LPDDR5 DRAM architecture that enables:

• Higher manufacturing yield
• Lower field failure rates
• Elimination of PCIe bottlenecks
• 100+ FPS inference even on small embedded boards

DEEPX also developed its own quantization pipeline, IQ8™, which keeps accuracy loss below 1% across more than 170 models (a generic sketch of this kind of quantization check appears below). "We've proven you can dramatically reduce power and memory without sacrificing output quality," Kim says.

Credit: DEEPX

Real customers. Real deployments. Real impact.

Kim uses a powerful metaphor to describe the company's strategic position: "If cloud AI is a deep ocean ruled by GPGPU-powered ships, then on-device AI is the shallow sea—close to land, full of opportunities, and hard to navigate with heavy hardware."

GPGPU vendors, he argues, are structurally unsuited to play in this space. Their business model and product architecture are simply too heavy to pivot to low-power, high-flexibility edge scenarios.

"They're like battleships," Kim says. "We're speedboats—faster, more agile, and able to handle 50 design changes while they do one."
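The IQ8 pipeline itself is proprietary, so as a purely generic, hypothetical illustration of what a post-training weight-quantization accuracy check looks like, here is a minimal NumPy sketch; every name, the toy data, and the error metric are assumptions for illustration and do not describe DEEPX's implementation.

    # Generic illustration of symmetric INT8 post-training weight quantization
    # and a quantization-error check. This is NOT DEEPX's IQ8 pipeline -- just a
    # minimal sketch of the general technique, using random data as a stand-in.
    import numpy as np

    def quantize_int8(w: np.ndarray):
        """Quantize a float32 tensor to INT8 with a single symmetric scale."""
        scale = np.abs(w).max() / 127.0
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
        return q.astype(np.float32) * scale

    rng = np.random.default_rng(0)
    weights = rng.normal(0.0, 0.05, size=(256, 256)).astype(np.float32)  # toy layer
    inputs = rng.normal(0.0, 1.0, size=(32, 256)).astype(np.float32)

    q, scale = quantize_int8(weights)
    ref_out = inputs @ weights                  # float32 reference path
    quant_out = inputs @ dequantize(q, scale)   # simulated INT8 path

    # Report relative output error; a production pipeline would instead compare
    # end-to-end model accuracy (e.g. top-1) against a <1% degradation budget.
    rel_err = np.linalg.norm(ref_out - quant_out) / np.linalg.norm(ref_out)
    print(f"Relative output error after INT8 quantization: {rel_err:.4%}")

In practice, per-channel scales and a calibration dataset replace the toy data used here; the point is simply that the quantized path is validated against a small accuracy budget before deployment.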
DEEPX isn't building in a vacuum. The DX-M1 is already being validated by major companies such as Hyundai Robotics Lab, POSCO DX, and LG Uplus, which rejected GPGPU-based designs over energy, cost, and cooling concerns. These companies found that even "free" chips resulted in a higher total cost of ownership (TCO) than the DX-M1 once electricity bills, cooling systems, and field failure risks were added in.

According to Kim, "Some of our collaborators realized that switching to DX-M1 saves up to 94% in power and cooling costs over five years. And those savings scale exponentially when you deploy millions of devices."

Building on this momentum, DEEPX is now entering full-scale mass production of the DX-M1, its first-generation NPU built on a cutting-edge 5nm process. Unlike many competitors still relying on 10–20nm nodes, DEEPX has achieved an industry-leading 90% yield at 5nm, setting the stage for dominant performance, efficiency, and scalability in the edge AI market.

Looking beyond current deployments, DEEPX is now developing its next-generation chip, the DX-M2—a groundbreaking on-device AI processor designed to run LLMs under 5 W. As large language model technology evolves, the field is splitting in two directions: one track continues to scale up LLMs in cloud data centers in pursuit of AGI; the other, more practical path focuses on lightweight, efficient models optimized for local inference, such as DeepSeek and Meta's LLaMA 4. DEEPX's DX-M2 is purpose-built for this second future.

With ultra-low power consumption, high performance, and a silicon architecture tailored for real-world deployment, the DX-M2 will support LLMs like DeepSeek and LLaMA 4 directly at the edge—no cloud dependency required. Most notably, the DX-M2 is being developed to become the first AI inference chip built on the leading-edge 2nm process—marking a new era of performance-per-watt leadership. In short, DX-M2 isn't just about running LLMs efficiently—it's about unlocking the next stage of intelligent devices, fully autonomous and truly local.

Credit: DEEPX

If ARM defined the mobile era, DEEPX will define the AI era

Looking ahead, Kim positions DEEPX not as a challenger to cloud chip giants, but as the foundational platform for the AI edge—just as ARM once was for mobile.

"We're not chasing the cloud," he says. "We're building the stack that powers AI where it actually interacts with the real world—at the edge."

In the 1990s, ARM changed the trajectory of computing by creating power-efficient, always-on architectures for mobile devices. That shift didn't just enable smartphones—it redefined how and where computing happens.

"History repeats itself," Kim says. "Just as ARM silently powered the mobile revolution, DEEPX is quietly laying the groundwork for the AI revolution—starting from the edge."

His 10-year vision is bold: to make DEEPX the "next ARM" of AI systems, enabling AI to live in the real world—not the cloud. From autonomous robots and smart city kiosks to factory lines and security systems, DEEPX aims to become the default infrastructure wherever AI must run reliably, locally, and on minimal power.

Everyone keeps asking about the IPO. Here's what Kim really thinks.

With DEEPX gaining attention as South Korea's most promising AI semiconductor company, one question keeps coming up: when is the IPO? For founder and CEO Lokwon Kim, the answer is clear—and measured.

"Going public isn't the objective itself—it's a strategic step we'll take when it aligns with our vision for sustainable success," Kim says.
"Our real focus is building proof—reliable products, real deployments, actual revenue. A unicorn company is one that earns its valuation through execution—especially in semiconductors, where expectations are unforgiving. The bar is high, and we intend to meet it."That milestone, Kim asserts, is no longer far away. In other words, DEEPX isn't rushing for headlines—it's building for history. DEEPX isn't just designing chips—it's designing trust.In an AI-powered world where milliseconds can mean millions, reliability is everything. As AI moves from cloud to edge, from theory to infrastructure, the companies that will define the next decade aren't those chasing faster clocks—but those building systems that never fail."We're not here to ride a trend," Kim concludes. "We're here to solve the hardest problems—the ones that actually matter."Credit: DEEPXWhen Reliability Matters Most—Industry Leaders Choose DEEPXVisit DEEPX at booth 4F, L0409 from May 20-23 at Taipei Nangang Exhibition Center to witness firsthand how we're setting new standards for reliable on-device AI.For more information, you can follow DEEPX on social media or visit their official website.