Nvidia announced several new chips and technologies that will boost computing speed of artificial intelligence (AI) algorithms, stepping up competition against rival chipmakers vying for lucrative data center business.
Nvidia’s graphics chips (GPUs), which helped propel and enhance the quality of video in the gaming market, have become the dominant chips companies use for AI workloads. The latest GPU, called the H100, can cut computing times from weeks to days for some work involving training AI models.
The announcements were made at Nvidia’s AI developers conference online.
“Data centers are becoming AI factories — processing and refining mountains of data to produce intelligence,” said Nvidia CEO Jensen Huang in a statement.
The H100 chip will be produced on Taiwan Semiconductor Manufacturing Company’s 4-nanometer process with 80 billion transistors and will be available in the third quarter.
The H100 will also be used to build Nvidia’s new Eos supercomputer, which Nvidia said will be the world’s fastest AI system when it begins operation later this year, Reuters reported.
Facebook parent Meta announced in January that it would build the world’s fastest AI supercomputer this year and it would perform at nearly 5 exaflops. Nvidia on Tuesday said its supercomputer will run at over 18 exaflops.
Nvidia introduced a new central processor (CPU) called the Grace CPU Superchip that is based on Arm technology. It is the first Arm-based chip Nvidia has announced since the company’s deal to buy Arm collapsed last month amid regulatory hurdles.
The Grace CPU Superchip, which will be available in the first half of next year, connects two CPU chips and will focus on AI and other tasks that require intensive computing power.
Earlier this month Apple unveiled its M1 Ultra chip connecting two M1 Max chips.
Nvidia said the two CPU chips were connected using its NVLink-C2C technology, which was also unveiled on Tuesday.