Hewlett Packard Enterprise (HPE) announced the launch of a cloud computing service tailored to support artificial intelligence (AI) systems similar to ChatGPT.
The enterprise technology major is offering its HPE GreenLake for Large Language Models (LLMs) service to a select group of customers, with plans to expand availability in North America by the end of this year and in Europe next year.
This move puts HPE in direct competition with top cloud computing providers such as Amazon.com, Microsoft, and Alphabet Inc’s Google. These industry giants are all racing to transform their vast data centers to accommodate the growing demand for AI-backed services like chatbots and image generators, which are attracting millions of users.
HPE GreenLake for LLMs will be delivered in partnership with Aleph Alpha, a German AI startup and HPE's first partner for the service, to provide users with a field-proven, ready-to-use LLM for use cases requiring text and image processing and analysis.
The emergence of AI is reshaping the cloud computing market, as data centers must be reconfigured to handle the unique requirements of AI workloads. In a traditional cloud computing data center, software divides a single physical server into multiple smaller "virtual" machines that can be rented out to customers. AI-focused data centers take the opposite approach, connecting hundreds or even thousands of computers into a single unified and powerful computing resource.
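The contrast between the two data center models can be sketched in a few lines of code. This is purely a conceptual illustration, not HPE's actual infrastructure software; the `Server` class and both functions are invented for clarity.

```python
# Conceptual sketch (not HPE's software): the two resource models
# described above, expressed as toy functions.

from dataclasses import dataclass

@dataclass
class Server:
    cores: int
    memory_gb: int

def partition_into_vms(server: Server, vm_count: int) -> list:
    """Traditional cloud model: slice one physical server into
    several smaller virtual machines rented to different tenants."""
    return [Server(server.cores // vm_count, server.memory_gb // vm_count)
            for _ in range(vm_count)]

def aggregate_cluster(servers: list) -> Server:
    """AI-focused model: pool many servers into one large logical
    computing resource serving a single training workload."""
    return Server(sum(s.cores for s in servers),
                  sum(s.memory_gb for s in servers))

# One 64-core server split into 8 small VMs for 8 tenants...
vms = partition_into_vms(Server(cores=64, memory_gb=512), vm_count=8)

# ...versus 1,000 such servers pooled into one 64,000-core resource.
cluster = aggregate_cluster([Server(cores=64, memory_gb=512)] * 1000)
```

In practice the aggregation side involves high-speed interconnects and distributed training frameworks rather than simple summation, but the direction of the resource arithmetic is the point: dividing down versus pooling up.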
HPE has been developing this type of technology for years, notably for projects like the Frontier supercomputer, which the company built in collaboration with the Oak Ridge National Laboratory in the United States and which currently holds the title of the world's fastest computer. Drawing on this supercomputing expertise, HPE plans to offer a dedicated service for large language models, the underlying technology behind services like ChatGPT.
Justin Hotard, executive vice president and general manager of HPE’s high-performance computing and artificial intelligence unit, expressed the company’s intention to utilize its knowledge in supercomputing to deliver a specialized service that caters specifically to large language models and their computational requirements.
HPE GreenLake for LLMs will include access to Luminous, a pre-trained large language model from Aleph Alpha that is offered in multiple languages, including English, French, German, Italian and Spanish. The LLM allows customers to use their own data to train and fine-tune a customized model, gaining real-time insights based on their proprietary knowledge.
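The "bring your own data, then fine-tune" workflow can be illustrated with a deliberately tiny analogue. This is not the Luminous or GreenLake API; the one-parameter model, the training loop, and the sample data below are all invented to show what fine-tuning a pre-trained model on proprietary examples means in principle.

```python
# Illustrative sketch only: a toy, pure-Python analogue of fine-tuning.
# Real LLM fine-tuning adjusts billions of parameters via a managed
# service API; here a "pre-trained" one-parameter model y = w*x + b
# is adapted to a customer's (x, y) examples by gradient descent.

def fine_tune(weight, bias, examples, lr=0.01, epochs=1000):
    """Nudge the starting parameters toward the customer's data by
    repeatedly stepping down the gradient of the squared error."""
    for _ in range(epochs):
        for x, y in examples:
            error = (weight * x + bias) - y
            weight -= lr * error * x  # d(error^2)/dw, up to a factor of 2
            bias -= lr * error        # d(error^2)/db, up to a factor of 2
    return weight, bias

# Hypothetical proprietary data that happens to follow y = 2x + 1.
proprietary_data = [(0, 1), (1, 3), (2, 5), (3, 7)]

# Start from generic "pre-trained" parameters, adapt to the data.
w, b = fine_tune(weight=1.0, bias=0.0, examples=proprietary_data)
```

After fine-tuning, `w` and `b` land close to 2 and 1, the values that fit the customer's data: the pre-trained starting point is preserved in spirit but specialized to proprietary knowledge, which is the workflow HPE and Aleph Alpha describe at LLM scale.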
“By using HPE’s supercomputers and AI software, we efficiently and quickly trained Luminous, a large language model for critical businesses such as banks, hospitals, and law firms to use as a digital assistant to speed up decision-making and save time and resources,” said Jonas Andrulis, founder and CEO, Aleph Alpha.
HPE also announced an expansion to its AI inferencing compute solutions. The new HPE ProLiant Gen11 servers are optimized for AI workloads, using NVIDIA H100 and L4 Tensor Core GPUs as well as L40 GPUs. The HPE ProLiant DL380a and DL320 Gen11 servers boost AI inference performance by more than 5X over previous models.