infotechlead

Mellanox launches FDR InfiniBand solution with NVIDIA GPUDirect RDMA support

Infotech Lead America: Mellanox Technologies, a supplier of high-performance, end-to-end interconnect solutions for data center servers and storage systems, has launched an FDR InfiniBand solution with support for NVIDIA GPUDirect remote direct memory access (RDMA) technology.

The next generation of NVIDIA GPUDirect technology improves application performance and efficiency for GPU-accelerated high-performance computing (HPC) clusters.

NVIDIA GPUDirect RDMA technology dramatically accelerates communications between GPUs by providing a direct peer-to-peer communication data path between Mellanox’s scalable HPC adapters and NVIDIA GPUs. This reduces GPU-GPU communication latency and completely offloads the CPU and system memory subsystem from all GPU-GPU communications across the network.

The latest performance results from Ohio State University demonstrated an MPI latency reduction of 69 percent, from 19.78us to 6.12us, when moving data between InfiniBand-connected GPUs. Overall throughput for small messages increased by 3x, and bandwidth for larger messages improved by 26 percent.
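As a quick sanity check, the reported 69 percent figure follows directly from the two latency numbers. A minimal sketch (the numbers are the article's; the script itself is only illustrative):

```python
# Sanity-check the reported latency reduction from the article's figures.
before_us = 19.78  # GPU-GPU MPI latency before GPUDirect RDMA (microseconds)
after_us = 6.12    # latency with GPUDirect RDMA enabled

reduction = (before_us - after_us) / before_us
print(f"Latency reduction: {reduction:.0%}")  # matches the reported 69 percent
```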

The performance testing was done using MVAPICH2 software from The Ohio State University’s Department of Computer Science and Engineering, which delivers high performance, scalability and fault tolerance for high-end computing systems and servers using InfiniBand.

MVAPICH2 software powers numerous supercomputers in the TOP500 list, including the 7th largest multi-Petaflop TACC Stampede system with 204,900 cores interconnected by Mellanox FDR 56Gb/s InfiniBand.

“The ability to transfer data directly to and from GPU memory dramatically speeds up system and application performance, enabling users to run computationally intensive code and get answers faster than ever before,” said Gilad Shainer, vice president of marketing at Mellanox Technologies.

Mellanox’s FDR InfiniBand solutions with NVIDIA GPUDirect RDMA ensure the highest level of application performance, scalability and efficiency for GPU-based clusters.

Mellanox ConnectX and Connect-IB based adapters are the world’s only InfiniBand solutions that provide the full offloading capabilities critical to avoiding CPU interrupts, data copies and system noise, while maintaining high efficiency for GPU-based clusters, the company said.

Combined with NVIDIA GPUDirect RDMA technology, Mellanox InfiniBand solutions are driving HPC environments to new levels of performance and scalability.

editor@infotechlead.com

