OpenAI is partnering with Broadcom and TSMC to build its first in-house AI chip to support its systems, aiming to diversify its chip supply and reduce costs. AMD chips will also be added to OpenAI’s infrastructure alongside Nvidia GPUs, according to a Reuters report.

Initially, OpenAI explored creating its own chip-manufacturing foundries. However, due to high costs and long timelines, it opted to focus on designing in-house chips and leveraging industry partnerships instead.
OpenAI’s approach of blending in-house design with external partnerships mirrors the strategies of big tech companies like Amazon, Google, and Microsoft, and may signal a broader shift in chip supply strategies across the tech sector.
Broadcom is assisting OpenAI with chip design and securing manufacturing capacity at TSMC, with OpenAI’s first chip planned to launch by 2026.
While training chips remain in high demand, analysts predict that demand for inference chips will grow as AI applications expand. OpenAI has not disclosed details of its spending on tech infrastructure.
OpenAI will use AMD’s MI300X chips via Microsoft’s Azure, part of AMD’s strategic push to capture a larger share of the AI chip market, which Nvidia dominates with over 80 percent market share.
Operating services like ChatGPT and training AI models are costly. OpenAI anticipates a $5 billion loss this year on projected revenue of $3.7 billion, primarily driven by high computing costs.
Although it is diversifying its suppliers, OpenAI remains cautious about poaching talent from Nvidia in order to maintain a positive relationship, especially as it looks to access Nvidia’s latest Blackwell chips.