Intel says Habana Gaudi2 processors outperform Nvidia’s A100

Intel revealed that its Habana Gaudi2 deep learning processors have outperformed Nvidia’s A100 submission for AI time-to-train on the MLPerf industry benchmark.
Intel's data center team, which focuses on deep learning processor technologies, enables data scientists and machine learning engineers to accelerate training and to build new models, or migrate existing ones with a few lines of code, enhancing productivity and lowering operational costs.
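To illustrate the "few lines of code" migration claim, here is a minimal sketch of the pattern Habana documents for PyTorch: import the SynapseAI bridge, target the `hpu` device, and call `htcore.mark_step()` after the optimizer step. The CPU fallback is an addition of this sketch (not part of Habana's flow) so it also runs on machines without Gaudi hardware.

```python
import torch

# Habana's SynapseAI PyTorch bridge exposes Gaudi as an "hpu" device.
# Fall back to CPU here so the sketch runs without Gaudi hardware.
try:
    import habana_frameworks.torch.core as htcore
    device = torch.device("hpu")
except ImportError:
    htcore = None
    device = torch.device("cpu")

# A standard PyTorch training step; the only Gaudi-specific changes
# are the device placement and the mark_step() call below.
model = torch.nn.Linear(16, 4).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
inputs = torch.randn(8, 16, device=device)
targets = torch.randn(8, 4, device=device)

loss = torch.nn.functional.mse_loss(model(inputs), targets)
loss.backward()
optimizer.step()
if htcore is not None:
    htcore.mark_step()  # flush queued ops to the Gaudi device (lazy mode)
```

The rest of the training loop is unchanged, which is the point of the migration story: existing PyTorch code ports with device-placement edits rather than a rewrite.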

Gaudi2 delivers advancements in time-to-train (TTT) over first-generation Gaudi and enabled Habana’s May 2022 MLPerf submission to outperform Nvidia’s A100-80G for eight accelerators on vision and language models.
For ResNet-50, Gaudi2 delivers a 36 percent reduction in time-to-train compared with Nvidia's A100-80GB submission, and a 45 percent reduction compared with Dell's A100-40GB eight-accelerator server submission, which covered both ResNet-50 and BERT.
Compared with first-generation Gaudi, Gaudi2 achieves a 3x speed-up in training throughput for ResNet-50 and 4.7x for BERT. These advances can be attributed to the transition from a 16-nanometer to a 7-nanometer process, a tripling of the number of Tensor Processor Cores, increased GEMM engine compute capacity, a tripling of the in-package high-bandwidth memory capacity, increased bandwidth, and a doubling of the SRAM size.

For vision models, Gaudi2 adds an integrated media engine, which operates independently of the compute cores and can handle the entire pre-processing pipeline for compressed imaging, including the data augmentation required for AI training.

“Delivering best-in-class performance in both vision and language models will bring value to customers and help accelerate their AI deep learning solutions,” Sandra Rivera, Intel executive vice president and general manager of the Datacenter and AI Group, said.