Flex Logix pairs InferX X1 inference accelerator with high-bandwidth Winbond 4Gb LPDDR4X to set new benchmark in edge AI

Press release

Winbond Electronics Corporation, a leading global supplier of semiconductor memory solutions, has revealed that its low-power, high-performance LPDDR4X DRAM technology is supporting the latest breakthrough in edge computing from Flex Logix for demanding artificial intelligence (AI) applications such as object recognition.

The Winbond LPDDR4X chip is being paired with Flex Logix's InferX X1 edge inference accelerator chip, which is based on an innovative architecture featuring arrays of reconfigurable Tensor Processors. This provides higher throughput and lower latency at lower cost than existing AI edge computing solutions when processing complex neural network workloads such as YOLOv3 or full-accuracy Winograd convolution.
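For background on the Winograd technique mentioned above, the short NumPy sketch below shows the textbook F(2,3) one-dimensional form, which produces two outputs of a 3-tap convolution using four multiplications instead of six. It is purely illustrative background and makes no assumptions about Flex Logix's own implementation.

```python
import numpy as np

# Illustrative only: the classic Winograd F(2,3) minimal-filtering form.
# It computes two outputs of a 3-tap correlation with 4 multiplications
# instead of 6. Generic background, not Flex Logix's implementation.

B_T = np.array([[1,  0, -1,  0],
                [0,  1,  1,  0],
                [0, -1,  1,  0],
                [0,  1,  0, -1]], dtype=float)   # input transform
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])                 # filter transform
A_T = np.array([[1, 1,  1,  0],
                [0, 1, -1, -1]], dtype=float)    # output transform

def winograd_f23(d, g):
    """Two outputs of the 3-tap correlation of input tile d (len 4) with filter g (len 3)."""
    return A_T @ ((G @ g) * (B_T @ d))

d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, 1.0, -1.0])
print(winograd_f23(d, g))                        # Winograd result
print(np.convolve(d, g[::-1], mode="valid"))     # direct result for comparison
```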

"We chose the Flex Logix InferX X1 edge accelerator because it delivered the highest throughput per dollar, which is critical to drive volume mainstream applications," said Robert Chang, Technology Executive of DRAM Product Marketing Center at Winbond. "The price/performance advantage of using InferX with our LPDDR4X chip has the potential to significantly expand AI applications by finally bringing inference capabilities to the mass market."

To support the InferX X1's high-speed operation of up to 7.5 TOPS while keeping power consumption to a minimum, Flex Logix has paired the accelerator with the W66CQ2NQUAHJ from Winbond, a 4Gb LPDDR4X DRAM which offers a maximum data rate of 4267Mbps at a maximum clock rate of 2133MHz. To enable use in battery-powered systems and other power-constrained applications, the W66 series device operates from 1.8V/1.1V core power rails and a 0.6V I/O supply, and offers power-saving features including partial array self-refresh.
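As a rough illustration of what those figures mean for memory bandwidth, the sketch below computes the theoretical peak transfer rate from the quoted data rate. The 32-bit total interface width (x16 per channel across the two channels) is an assumption typical of dual-channel LPDDR4X packages, not a figure stated in the release.

```python
# Back-of-the-envelope peak bandwidth for the W66CQ2NQUAHJ pairing.
# Assumption (not stated in the release): x16 I/O per channel, two channels,
# i.e. a 32-bit total interface, which is typical of dual-channel LPDDR4X.

DATA_RATE_MBPS_PER_PIN = 4267   # maximum data rate quoted in the release
IO_PINS = 16 * 2                # assumed: x16 per channel x 2 channels

peak_bandwidth_mb_per_s = DATA_RATE_MBPS_PER_PIN * IO_PINS / 8
print(f"Theoretical peak bandwidth: {peak_bandwidth_mb_per_s / 1000:.1f} GB/s")
# -> roughly 17 GB/s, before protocol overheads and refresh
```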

The Winbond LPDDR4X chip operates alongside the InferX X1 processor in Flex Logix's half-height/half-length PCIe embedded processor board for edge servers and gateways. The system takes advantage of Flex Logix's architectural innovations, such as reconfigurable optimized data paths which reduce the traffic between the processor and DRAM, to increase throughput and reduce latency.

Dana McCarty, VP of Sales & Marketing for Flex Logix's AI Inference Products, said, "The combination of the unique InferX X1 processor and Winbond's high-bandwidth LPDDR4X chip sets a new benchmark in edge AI performance. Now, for the first time, affordable edge computing systems can implement complex neural networking algorithms to achieve high accuracy in object detection and image recognition, even when processing data-intensive high-definition video streams."

The 4Gb W66CQ2NQUAHJ comprises two 2Gb dies in a two-channel configuration. Each die is organized into eight internal banks which support concurrent operation. The chip is housed in a 200-ball WFBGA package measuring 10mm x 14.5mm.

For more information about Winbond's 1Gb SDP (CS in H1'22), 2Gb SDP and 4Gb LPDDR4/LPDDR4X products, go to www.winbond.com.

For more information about the InferX X1 edge inference accelerator, go to flex-logix.com/inference.

Winbond's low-power, high-performance LPDDR4X DRAM
