Facebook parent Meta Platforms announced the launch of a free set of software tools for artificial intelligence (AI) applications that could make it easier for developers to switch between different underlying chips.
[Figure: Benchmark of AIT-A100 and AIT-MI250 on various models. The MI250 runs in data-parallel mode, with each GCD (Graphics Compute Die) processing half of the data; at batch size 1, the batch runs on a single GCD while the other sits idle.]
Meta said the open-source tool is built on PyTorch, its open-source machine learning framework, and claims it can make code run up to 12 times faster on Nvidia's A100 chip and up to four times faster on Advanced Micro Devices' (AMD) MI250 chip.
Software has become a battleground for chipmakers seeking to build up an ecosystem of developers who use their chips. Nvidia's CUDA platform has so far been the most popular for AI work, according to a Reuters report.
However, once developers tailor their code for Nvidia chips, it is difficult to run it on graphics processing units (GPUs) from Nvidia's competitors such as AMD. The new software is designed to let developers swap between chips easily without being locked in, Meta said in a blog post.
The GPU back-end support gives deep learning developers more choice of hardware vendors with minimal migration cost, Meta added.
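The idea behind that claim can be illustrated with a minimal, stdlib-only Python sketch: the model is defined once as a vendor-neutral operator graph, and a vendor-specific backend is chosen only at compile time. The class and function names below (`CudaBackend`, `RocmBackend`, `compile_model`) are hypothetical illustrations of the concept, not AITemplate's or PyTorch's actual API.

```python
# Sketch of backend-agnostic compilation: write the model once,
# pick the vendor-specific code generator at compile time.
# Names here are illustrative assumptions, NOT a real library API.

class Backend:
    name = "generic"
    def codegen(self, op: str) -> str:
        raise NotImplementedError

class CudaBackend(Backend):
    name = "cuda"  # hypothetical stand-in for an Nvidia code generator
    def codegen(self, op: str) -> str:
        return f"cuda_kernel<{op}>"

class RocmBackend(Backend):
    name = "rocm"  # hypothetical stand-in for an AMD code generator
    def codegen(self, op: str) -> str:
        return f"hip_kernel<{op}>"

def compile_model(ops, backend):
    # The same operator graph lowers to different vendor kernels,
    # so switching vendors means switching backends, not rewriting the model.
    return [backend.codegen(op) for op in ops]

model = ["gemm", "layernorm", "gemm"]  # toy operator graph
print(compile_model(model, CudaBackend()))  # Nvidia target
print(compile_model(model, RocmBackend()))  # AMD target, same model code
```

In this toy design the migration cost is confined to the backend class; the model definition and the compilation driver never change, which is the property Meta's blog post emphasizes.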
Meta said proprietary software toolkits such as TensorRT offer some avenues for customization, but they are often not enough to satisfy the needs of software developers.
“It’s a testament to the importance of software, particularly for deploying neural networks in machine learning for inference,” said David Kanter, a founder of MLCommons, an independent group that measures AI speed.