Nvidia Chief Executive Officer Jensen Huang has laid out the company’s plans to make the powerful and expensive supercomputers used to develop AI technologies like ChatGPT available for rent to nearly any business.
Rentals will cost $37,000 a month for eight of Nvidia’s flagship A100 or H100 chips strung together. Offering the technology to a wider swath of business customers could accelerate an AI boom that has driven Nvidia shares up 77 percent this year, making the company about five times more valuable than longtime rival Intel.
Nvidia’s new rental service, called DGX Cloud, could give many more developers the chance to access tens of thousands of its chips at once. Biotech firm Amgen and software firm ServiceNow have started using the service, Nvidia said.
DGX Cloud is optimized to run Nvidia AI Enterprise, which the company describes as the world’s leading acceleration software suite for developing and deploying AI.
Nvidia is partnering with cloud service providers to host DGX Cloud infrastructure, starting with Oracle Cloud Infrastructure. Microsoft Azure is expected to begin hosting DGX Cloud next quarter, and the service will later expand to Google Cloud and other providers.
The Santa Clara, California-based company already dominates the field for artificial intelligence chips and has helped partners like Microsoft build huge systems for ChatGPT creator OpenAI’s services to answer questions with human-like text and generate images from prompts.
At Nvidia’s annual software developer conference on Tuesday, Huang said the company was working with partners such as Oracle to offer access to Nvidia’s DGX supercomputers with as many as 32,000 of Nvidia’s chips to anyone who can log on with a web browser.
Nvidia is also working with Microsoft and Alphabet to offer its supercomputers, used to create new AI products, as a service.
Nvidia on Tuesday announced new chips and software designed to make products like chatbots much cheaper to operate on a day-to-day basis after they have been created with supercomputers.
Those products are years ahead of the competition, said Hans Mosesmann, a semiconductors analyst at Rosenblatt Securities, who added that Nvidia’s leadership on the software side of AI is not only monumental but accelerating.
Nvidia is also partnering with AT&T to make dispatching trucks more efficient and collaborating with quantum computing researchers to speed software development.
Nvidia is also working with industry giant Taiwan Semiconductor Manufacturing (TSMC) to speed up chip development.
Nvidia also launched a service called AI Foundations to help companies train their customized artificial intelligence models. Several major owners of stock image databases plan to use the service, which would avert legal questions about copyright of images used to generate AI content.
Nvidia also announced technology to speed up the design and manufacturing of semiconductors. The software uses Nvidia’s chips to speed up a step that sits between the software-based design of a chip and the physical fabrication of the lithography masks used to print that design on a piece of silicon.
Those calculations could take a traditional computing chip two weeks to complete, but Nvidia said Tuesday its chips and software can handle the task overnight while cutting the power required from 35 megawatts to 5 megawatts.
Nvidia said it was working with ASML Holding, Synopsys and TSMC to bring the technology to market, with TSMC set to start readying it for production in June.