OpenAI has announced a partnership with Broadcom to co-develop and deploy its first in-house artificial intelligence processors, marking a significant milestone in the AI infrastructure race.
Under the agreement, OpenAI will design the custom AI accelerators and systems, while Broadcom will develop and deploy them at scale, with rollout beginning in the second half of 2026.
The deal covers 10 gigawatts of custom AI accelerators, roughly the electricity demand of more than 8 million U.S. homes, with the full rollout expected by the end of 2029. The initiative aims to meet global demand for AI computing power and reduce reliance on Nvidia's dominant GPU ecosystem.
Broadcom shares surged more than 10 percent following the announcement, reflecting strong investor optimism about the company's growing role in the AI chip supply chain, Reuters reported.
The collaboration plays to Broadcom's strength in Ethernet-based networking: the new AI racks will use its portfolio of Ethernet, PCIe, and optical connectivity components, positioning Ethernet as a challenger to Nvidia's InfiniBand networking technology.
Sam Altman, OpenAI’s CEO, called the partnership “a critical step in building the infrastructure needed to unlock AI’s potential and deliver real benefits for people and businesses.” He added that developing custom accelerators will enable OpenAI to embed insights from frontier AI model development directly into the hardware.
Broadcom CEO Hock Tan said the alliance marks “a pivotal moment in the pursuit of artificial general intelligence,” while OpenAI President Greg Brockman emphasized that the new chips will “unlock new levels of capability and intelligence.”
The collaboration comes amid a wave of multibillion-dollar chip investments by OpenAI. Recently, the company announced a 6-gigawatt AI chip supply deal with AMD. In addition, Nvidia plans to invest up to $100 billion in OpenAI’s infrastructure.
With over 800 million weekly active users, OpenAI is scaling to support enterprise, small business, and developer adoption worldwide. The Broadcom partnership positions it among global cloud leaders like Google, Amazon, and Microsoft in the race to build custom AI hardware and ensure the computing capacity needed for the next generation of intelligent systems.
Rajani Baburajan

