Mellanox Supports Nvidia’s GPUDirect RDMA with FDR InfiniBand Interconnect

By Wolfgang Gruener - Source: Tom's IT Pro

Mellanox is following up on its initial support for Nvidia’s GPUDirect technology, which is designed to remove a communication bottleneck in GPU-to-GPU transfers in cluster environments.

According to the company, FDR InfiniBand integrates support for remote direct memory access (RDMA) and accelerates data transfers between GPUs via a peer-to-peer data path between Mellanox’s scalable HPC adapters and Nvidia’s graphics processors. Compared with the initial ConnectX InfiniBand adapters launched in May 2010, the new FDR InfiniBand drops MPI latency from 19.78 us to 6.12 us, while overall throughput for small messages increases by 3x and bandwidth for larger messages improves by 26 percent, Mellanox claims.
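To illustrate what this peer-to-peer path means for application code, here is a minimal sketch of the usage pattern GPUDirect RDMA enables. It assumes a CUDA-aware MPI implementation (for example, MVAPICH2 or Open MPI built with CUDA support); the message size and the one-GPU-per-rank layout are illustrative assumptions, not part of Mellanox’s announcement.

```c
/*
 * Sketch: with a CUDA-aware MPI library, a device pointer can be handed
 * straight to MPI, letting the InfiniBand adapter move the data without
 * staging it through a host buffer first.
 */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

#define N (1 << 20)   /* 1M floats per message (illustrative size) */

int main(int argc, char **argv)
{
    int rank;
    float *d_buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    cudaSetDevice(0);                        /* one GPU per rank assumed */
    cudaMalloc((void **)&d_buf, N * sizeof(float));
    cudaMemset(d_buf, 0, N * sizeof(float));

    if (rank == 0) {
        /* Device pointer passed directly to MPI_Send: no cudaMemcpy to a
         * host staging buffer is needed when the MPI stack supports
         * direct GPU memory access. */
        MPI_Send(d_buf, N, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(d_buf, N, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d floats directly into GPU memory\n", N);
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```

Without a direct path, each rank would first have to copy the data between GPU memory and a host staging buffer with cudaMemcpy before and after the MPI call; removing that extra hop is where the latency and bandwidth gains cited above come from.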

“The ability to transfer data directly to and from GPU memory dramatically speeds up system and application performance, enabling users to run computationally intensive code and get answers faster than ever before,” said Gilad Shainer, Vice President of Marketing at Mellanox, in a prepared statement.

Catering to GPUs’ ability to excel at floating-point operations, the company said that GPU clusters work well in application areas such as seismic processing, computational fluid dynamics, and molecular dynamics.

Developers can access alpha code for GPUDirect RDMA, which is available via the latest version of Nvidia’s CUDA developer kit and the R270 drivers.

Wolfgang Gruener is a contributor to Tom's IT Pro. He is currently principal analyst at Ndicio Research, a market analysis firm that focuses on disruptive technologies. An 18-year veteran of IT journalism, he previously published TG Daily and was editor of Tom's Hardware news, which he grew from a link collection in the early 2000s into one of the most comprehensive and trusted technology news sources.

