Nvidia has announced that it plans to integrate a high-speed interconnect, called NVLink, into its future GPUs, enabling GPUs and CPUs to share data five to 12 times faster than they can today.
This will eliminate a longstanding bottleneck and help pave the way for a new generation of exascale supercomputers that are 50-100 times faster than today's most powerful systems, the vendor said.
Nvidia will add NVLink technology to its Pascal GPU architecture - expected to be introduced in 2016 - which follows the Maxwell compute architecture due in 2014. The new interconnect was co-developed with IBM, which will incorporate it into future versions of its Power CPUs.
With NVLink technology tightly coupling IBM Power CPUs with Nvidia Tesla GPUs, the Power data center ecosystem will be able to fully leverage GPU acceleration for a diverse set of applications, such as high-performance computing, data analytics and machine learning.
Today's GPUs are connected to x86-based CPUs through the PCI Express (PCIe) interface, which limits the GPU's ability to access the CPU memory system and is four to five times slower than typical CPU memory systems. PCIe is an even greater bottleneck between the GPU and IBM's Power CPUs, which have more memory bandwidth than x86 CPUs. Because the NVLink interface will match the bandwidth of typical CPU memory systems, it will enable GPUs to access CPU memory at its full bandwidth.
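To put that "four to five times" gap in perspective, here is a minimal back-of-the-envelope sketch. The bandwidth figures are illustrative assumptions for hardware of this era, not numbers from the announcement:

```python
# Rough comparison of interconnect vs. CPU memory bandwidth.
# Assumed figures (not vendor specifications): PCIe 3.0 x16 delivers
# roughly 16 GB/s per direction, while a typical CPU memory system of
# the era sustains on the order of 70 GB/s.

PCIE_GBPS = 16.0      # assumed PCIe 3.0 x16 effective bandwidth, GB/s
CPU_MEM_GBPS = 70.0   # assumed typical CPU memory bandwidth, GB/s
DATA_GB = 8.0         # hypothetical working set for a GPU to read, GB

pcie_seconds = DATA_GB / PCIE_GBPS        # time to pull data over PCIe
mem_seconds = DATA_GB / CPU_MEM_GBPS      # time at full memory speed
ratio = CPU_MEM_GBPS / PCIE_GBPS          # how much slower PCIe is

print(f"Over PCIe:          {pcie_seconds:.2f} s")
print(f"At memory speed:    {mem_seconds:.2f} s")
print(f"CPU memory is {ratio:.1f}x faster than PCIe")
```

Under these assumed figures the ratio works out to about 4.4x, in line with the "four to five times" gap the article describes; an interconnect that matches memory bandwidth would remove that penalty entirely.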