Hewlett Packard Enterprise, one of the leading networking suppliers, has announced the industry’s first fully fanless direct liquid cooling system architecture, aimed at improving the energy and cost efficiency of large-scale AI deployments.

The innovation, unveiled at HPE’s AI Day, marks a major milestone in the company’s drive to provide more sustainable and efficient solutions for AI workloads. The event was held at one of HPE’s AI systems manufacturing facilities, showcasing the company’s leadership in AI for enterprises, governments, service providers, and model builders.
The International Energy Agency (IEA) reports that data centers consumed about 2 percent of global electricity in 2022, and predicts that their consumption could more than double by 2026.
Innovation is a key growth driver for the enterprise networking vendor. HPE reported revenue of $7.7 billion (up 10 percent year over year) for the third quarter ended July 31, 2024, and is targeting fourth-quarter fiscal 2024 revenue of $8.1 billion to $8.4 billion.
Cooling Solution for AI Workloads
As AI adoption intensifies, power consumption for AI workloads continues to climb, outpacing the capabilities of traditional air-cooled systems. To address this, HPE’s direct liquid cooling technology offers a more efficient alternative, building on the company’s track record of delivering 7 of the top 10 energy-efficient supercomputers on the Green500 list.
HPE’s 100 percent fanless direct liquid cooling system is designed to cut cooling power consumption by up to 90 percent compared to conventional air-cooled systems, positioning it as a game-changer for AI deployments that require enhanced energy efficiency and sustainability.
Addressing the Energy and Cost Challenges of AI
Antonio Neri, President and CEO of HPE, said: “Our new liquid cooling architecture delivers greater energy and cost efficiency advantages than anything else on the market. It enables a 90 percent reduction in cooling power consumption, making it ideal for large-scale generative AI deployments.”
Martina Raveni, Analyst in the Strategic team at GlobalData, said: “The high temperature of data centers is a critical issue at the moment. If equipment overheats, malfunctions and breakdowns can occur, with repercussions for the many sectors that rely on those data centers. As demand for AI applications increases, managing these temperatures will become increasingly important.”
Key Features of HPE’s Fanless Liquid Cooling System
The architecture is built on four foundational pillars that set it apart from hybrid cooling systems:
8-Element Cooling Design: The system includes liquid cooling for critical components, such as the GPU, CPU, server blade, local storage, network fabric, and even the coolant distribution unit (CDU). This approach ensures optimized cooling for the entire system.
High-Density, High-Performance Design: Each system undergoes rigorous testing and comes with integrated monitoring software and on-site support to ensure successful deployment.
Integrated Network Fabric: The design incorporates a scalable, lower-cost, and lower-power connection system, helping reduce power consumption across massive AI infrastructures.
Open System Design: Offering flexibility in accelerator choices, the system accommodates a wide range of AI and compute workloads.
This fanless architecture reduces the cooling power required per server blade by 37 percent, cuts utility costs, and minimizes carbon emissions. Additionally, it allows for higher cabinet density, reducing the data center floor space by up to 50 percent.
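The savings figures above can be illustrated with a back-of-envelope calculation. The baseline numbers in the sketch below (a 200 W air-cooling draw per blade and a 1,000 m² data hall) are hypothetical assumptions chosen purely for illustration, not HPE-published data; only the 37 percent and 50 percent reduction figures come from the announcement.

```python
# Back-of-envelope sketch of the cited savings.
# The reduction percentages are from HPE's announcement; the baseline
# inputs (200 W per blade, 1,000 m^2 hall) are hypothetical examples.

def cooling_power_per_blade(baseline_w: float, reduction: float = 0.37) -> float:
    """Cooling power per server blade after the cited 37 percent reduction."""
    return baseline_w * (1 - reduction)

def floor_space_required(baseline_sq_m: float, reduction: float = 0.50) -> float:
    """Floor space after the cited up-to-50-percent density gain."""
    return baseline_sq_m * (1 - reduction)

print(cooling_power_per_blade(200.0))   # 126.0 W per blade
print(floor_space_required(1000.0))     # 500.0 m^2
```

Under these assumed baselines, each blade’s cooling draw falls from 200 W to 126 W, and the same deployment fits in half the floor space.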
Seizing AI Growth Opportunities
During HPE’s AI Day, Antonio Neri, alongside Fidelma Russo, EVP & GM of Hybrid Cloud and HPE CTO, and Neil MacDonald, EVP & GM of Server, discussed HPE’s strategic portfolio for AI, hybrid cloud, and networking. They emphasized how HPE’s leadership in liquid cooling, combined with decades of experience building large-scale IT environments, positions the company to capture growing AI demand.
Baburajan Kizhakedath