
Mellanox launches FDR InfiniBand solution with NVIDIA GPUDirect RDMA support

Infotech Lead America: Mellanox Technologies, a supplier of high-performance, end-to-end interconnect solutions for data center servers and storage systems, has launched its FDR InfiniBand solution with support for NVIDIA GPUDirect remote direct memory access (RDMA) technology.

The next generation of NVIDIA GPUDirect technology improves application performance and efficiency for GPU-accelerated high-performance computing (HPC) clusters.

NVIDIA GPUDirect RDMA technology dramatically accelerates communications between GPUs by providing a direct peer-to-peer communication data path between Mellanox’s scalable HPC adapters and NVIDIA GPUs. This reduces GPU-GPU communication latency and completely offloads the CPU and system memory subsystem from all GPU-GPU communications across the network.
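To make the data path concrete, the sketch below shows, at the InfiniBand Verbs level, how a buffer allocated in GPU memory can be registered with the adapter so that it can be targeted by RDMA operations directly. This is a minimal illustration rather than Mellanox's implementation: it assumes a GPUDirect-RDMA-capable adapter and the NVIDIA peer-memory kernel module are installed, and the device index, buffer size, and variable names are placeholders.

```c
/* Minimal sketch: registering GPU memory with an InfiniBand HCA for
 * GPUDirect RDMA. Assumes a GPUDirect-capable adapter and the NVIDIA
 * peer-memory kernel module; device index and buffer size are illustrative. */
#include <stdio.h>
#include <infiniband/verbs.h>
#include <cuda_runtime.h>

int main(void)
{
    /* Open the first InfiniBand device found on the node. */
    int num_devices = 0;
    struct ibv_device **dev_list = ibv_get_device_list(&num_devices);
    if (!dev_list || num_devices == 0) {
        fprintf(stderr, "no InfiniBand devices found\n");
        return 1;
    }
    struct ibv_context *ctx = ibv_open_device(dev_list[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Allocate a buffer directly in GPU memory. */
    void *gpu_buf = NULL;
    size_t len = 1 << 20;                       /* 1 MiB, illustrative */
    if (cudaMalloc(&gpu_buf, len) != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed\n");
        return 1;
    }

    /* With GPUDirect RDMA, the GPU pointer can be registered with the HCA
     * like ordinary host memory; the adapter can then read and write GPU
     * memory directly, without staging through system memory. */
    struct ibv_mr *mr = ibv_reg_mr(pd, gpu_buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) {
        fprintf(stderr, "ibv_reg_mr on GPU memory failed "
                        "(GPUDirect RDMA not available?)\n");
    } else {
        printf("registered %zu bytes of GPU memory: lkey=0x%x rkey=0x%x\n",
               len, mr->lkey, mr->rkey);
        /* The memory region keys would now be used in RDMA work requests
         * posted to a queue pair (connection setup omitted here). */
        ibv_dereg_mr(mr);
    }

    cudaFree(gpu_buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(dev_list);
    return 0;
}
```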

The latest performance results from Ohio State University demonstrated an MPI latency reduction of 69 percent, from 19.78µs to 6.12µs, when moving data between InfiniBand-connected GPUs, while overall throughput for small messages increased by 3x and bandwidth for larger messages increased by 26 percent.

The performance testing was done using MVAPICH2 software from The Ohio State University’s Department of Computer Science and Engineering, which delivers high performance, scalability and fault tolerance for high-end computing systems and servers using InfiniBand.
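As an illustration of how applications use such an MPI stack, the following minimal sketch passes GPU device pointers directly to MPI_Send and MPI_Recv between two ranks. It assumes a CUDA-aware build of MVAPICH2 (or another CUDA-aware MPI) with GPUDirect RDMA support; the buffer name, message size, and tag are hypothetical.

```c
/* Minimal sketch: moving data between InfiniBand-connected GPUs with a
 * CUDA-aware MPI such as MVAPICH2. The device pointer is handed straight to
 * MPI; when GPUDirect RDMA is enabled in the MPI stack, the transfer can
 * bypass host staging buffers. Message size and tag are illustrative. */
#include <stdio.h>
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int count = 1024;                     /* illustrative message size */
    float *gpu_buf = NULL;
    cudaMalloc((void **)&gpu_buf, count * sizeof(float));

    if (rank == 0) {
        /* Rank 0 sends its GPU buffer directly; no cudaMemcpy to host first. */
        cudaMemset(gpu_buf, 0, count * sizeof(float));
        MPI_Send(gpu_buf, count, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Rank 1 receives directly into GPU memory. */
        MPI_Recv(gpu_buf, count, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d floats into GPU memory\n", count);
    }

    cudaFree(gpu_buf);
    MPI_Finalize();
    return 0;
}
```

In a plain MPI build, such a transfer would require explicit staging through host memory; CUDA-aware builds accept device pointers directly, and whether GPUDirect RDMA is used for a given message typically depends on the MPI distribution's configuration and message-size thresholds.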

MVAPICH2 software powers numerous supercomputers in the TOP500 list, including the 7th largest multi-Petaflop TACC Stampede system with 204,900 cores interconnected by Mellanox FDR 56Gb/s InfiniBand.

“The ability to transfer data directly to and from GPU memory dramatically speeds up system and application performance, enabling users to run computationally intensive code and get answers faster than ever before,” said Gilad Shainer, vice president of marketing at Mellanox Technologies.

Mellanox's FDR InfiniBand solutions with NVIDIA GPUDirect RDMA ensure the highest level of application performance, scalability and efficiency for GPU-based clusters.

Mellanox ConnectX and Connect-IB based adapters are the world's only InfiniBand solutions that provide the full offloading capabilities critical to avoiding CPU interrupts, data copies and system noise, while maintaining high efficiency for GPU-based clusters, the company said.

Combined with NVIDIA GPUDirect RDMA technology, Mellanox InfiniBand solutions are driving HPC environments to new levels of performance and scalability.

editor@infotechlead.com
