
Mellanox Supports Nvidia GPUDirect RDMA Technology

By Kevin Parrish - Source: Mellanox

Mellanox Technologies announced on Monday that Nvidia GPUDirect RDMA technology, which works with Nvidia Tesla K40 and K20 series GPU accelerators, is now supported on Mellanox Connect-IB InfiniBand adapters. Beta-level support for Nvidia GPUDirect RDMA and MVAPICH2-GDR 2.0b will be publicly available this quarter with the upcoming MLNX_OFED 2.1 release.
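
For applications, GPUDirect RDMA is typically reached through a CUDA-aware MPI library such as MVAPICH2-GDR: a device pointer can be handed straight to MPI calls, and the InfiniBand adapter moves the data without a staging copy through host memory. The following is a minimal sketch of that usage pattern, not code from Mellanox or Nvidia; the buffer size, the mpicc build line, and the MV2_USE_CUDA setting mentioned in the comments are illustrative assumptions.

```c
/* Minimal CUDA-aware MPI sketch: two ranks exchange a GPU-resident buffer.
 * With a GPUDirect-RDMA-capable stack (e.g. MVAPICH2-GDR over Connect-IB),
 * the device pointer passed to MPI_Send/MPI_Recv can be transferred by the
 * adapter directly from GPU memory. Build with a CUDA-aware MPI, e.g.:
 *   mpicc gdr_sendrecv.c -lcudart   (plus CUDA include/library paths)
 * and run with the library's CUDA support enabled (for MVAPICH2 that is
 * typically MV2_USE_CUDA=1; exact knobs depend on the release).
 */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;                 /* 1M floats, ~4 MB (arbitrary) */
    float *dbuf = NULL;
    cudaMalloc((void **)&dbuf, n * sizeof(float));

    if (rank == 0) {
        cudaMemset(dbuf, 0, n * sizeof(float));
        /* Device pointer goes straight into MPI_Send; no host staging copy. */
        MPI_Send(dbuf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(dbuf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d floats into GPU memory\n", n);
    }

    cudaFree(dbuf);
    MPI_Finalize();
    return 0;
}
```

Run it with two ranks (for example, mpirun -np 2 ./a.out); whether the transfer actually takes the GPUDirect RDMA path depends on the adapter, driver stack, and MPI build.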

"We see increased adoption of FDR InfiniBand and Nvidia GPUDirect RDMA technology by leading commercial partners, government agencies, as well as academia and research institutions," said Gilad Shainer, vice president of marketing at Mellanox Technologies. "Mellanox's FDR InfiniBand solutions with NVIDIA GPUDirect RDMA are providing the highest level of application performance, scalability and efficiency for GPU-based clusters."

The combined solution promises a 67 percent reduction in small-message latency, a 10 percent reduction in large-message latency, and a 5X bandwidth improvement for small messages with Connect-IB. Additional listed features include support for GPUDirect RDMA over both InfiniBand and Ethernet (RoCE), as well as multi-rail capabilities for Nvidia GPUDirect RDMA with MVAPICH2.
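
Latency figures like these generally come from GPU-to-GPU ping-pong microbenchmarks (the OSU micro-benchmarks distributed alongside MVAPICH2 are the usual tool). As a rough illustration of what such a test measures, here is a hedged sketch of a device-buffer ping-pong timed with MPI_Wtime; the 8-byte message size and iteration count are arbitrary placeholders, not the configuration behind the quoted numbers.

```c
/* Rough sketch of a GPU-to-GPU ping-pong latency test between two ranks.
 * Assumes a CUDA-aware MPI (e.g. MVAPICH2-GDR); sizes and iteration counts
 * are illustrative only.
 */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int bytes = 8;                   /* "small message" size, arbitrary */
    const int iters = 10000;
    char *dbuf = NULL;
    cudaMalloc((void **)&dbuf, bytes);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(dbuf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(dbuf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(dbuf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(dbuf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)  /* one-way latency is half the averaged round-trip time */
        printf("avg latency: %.2f us\n", (t1 - t0) * 1e6 / (2.0 * iters));

    cudaFree(dbuf);
    MPI_Finalize();
    return 0;
}
```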

"With 12 GB of ultra-fast GDDR5 memory and support for PCIe Gen 3 interconnect technology, the new Tesla K40 accelerators are ideal for ultra-large scale scientific and commercial workloads," said Ian Buck, vice president of Accelerated Computing at Nvidia. "When coupled with NVIDIA GPUDirect RDMA technology, Mellanox InfiniBand solutions unlock new levels of performance for HPC customers by enabling direct memory access from the GPU across the InfiniBand fabric."

Mellanox also announced on Monday that its Connect-IB FDR 56 Gb/s InfiniBand adapters were benchmarked on multiple applications, including WIEN2k, a quantum mechanical simulation package, and WRF, the Weather Research and Forecasting model. The company said the adapters delivered up to 200 percent higher performance at forty compute nodes compared to competing QDR 40 Gb/s InfiniBand solutions.

"Mellanox's dual-port Connect-IB FDR 56 Gb/s InfiniBand with PCI Express 3.0 x16 adapter delivers 3X higher message rate over competing solutions, enabling 137 million messages per second," the company's press release states. "It is the world’s fastest interconnect adapter, and, when used with Mellanox end-to-end FDR InfiniBand solutions (switches and cables), it delivers best-in-class application efficiency and scalability."

Mellanox's FDR 56 Gb/s InfiniBand solution is available today and includes ConnectX-3 and Connect-IB adapter cards, SwitchX-2 based switches (from 12-port to 648-port), fiber and copper cables, and ScalableHPC accelerator and management software.
