OpenAI has decided to partner with Google Cloud to scale its compute infrastructure amid explosive demand for AI model training and inference.

While Microsoft Azure has been OpenAI’s primary cloud partner, the collaboration with Google Cloud marks a strategic diversification, enabling OpenAI to secure additional capacity as it works toward building its own chips and data centers under projects like Stargate, according to a Reuters report. OpenAI and Google Cloud have yet to confirm the deal.
At the heart of the deal between Google Cloud and OpenAI is the scale of computing power required to operate AI products like ChatGPT, which has rapidly grown into a business generating $10 billion in annualized revenue.
Google Cloud’s advanced AI infrastructure, particularly its in-house tensor processing units (TPUs), offers OpenAI a powerful complement to existing resources. Google’s recent strategy to commercialize TPUs — once reserved for internal use — has attracted several high-profile AI clients, making it a compelling choice for OpenAI.
Google Cloud’s revenue reached $12.26 billion in the first quarter of 2025, up from $9.574 billion in Q1-2024.
Alphabet’s capital expenditure (capex) rose to $17.19 billion in Q1-2025, up from $14.27 billion in Q4-2024, $13.06 billion in Q3-2024, $13.18 billion in Q2-2024 and $12.012 billion in Q1-2024. Google Cloud has built a robust AI infrastructure through long-term investments in its global network, which includes over two million miles of fiber and 33 subsea cables.
Google Cloud offers the industry’s broadest selection of TPUs and GPUs, supporting both training and inference at scale. The latest TPU, Ironwood, is the most powerful yet — designed specifically for large-scale inference — delivering over 10x the compute power of previous models with nearly double the energy efficiency. Google Cloud also maintains a strong partnership with NVIDIA and was the first cloud provider to offer access to NVIDIA’s B200 and GB200 Blackwell GPUs, with plans to support the upcoming Vera Rubin GPUs.
The deal also highlights a pragmatic shift in the AI landscape: even fierce rivals are now collaborating when it comes to infrastructure. Despite OpenAI’s threat to Google’s core search business, both companies appear willing to prioritize operational needs over competition. For OpenAI, the deal ensures continuity and growth; for Google, it strengthens the company’s position in the AI infrastructure market and justifies massive capital expenditures on AI.
Ultimately, OpenAI’s selection of Google Cloud reflects both urgency and foresight: the company gains flexibility, reduces dependence on a single provider, and secures access to best-in-class computing platforms at a critical stage of AI development.
OpenAI has demonstrated rapid growth and strong performance, driven by soaring global demand for generative AI tools. As of June 2025, OpenAI reported an annualized revenue run rate of $10 billion, fueled primarily by its flagship product, ChatGPT. The company has seen widespread adoption across enterprise and consumer markets, positioning itself as a key player in the AI industry.
To support this growth, OpenAI has aggressively expanded its computing infrastructure, moving beyond its exclusive reliance on Microsoft Azure by partnering with other providers like Google Cloud, Oracle, and CoreWeave. It is also investing in long-term projects like the $500 billion Stargate initiative and developing its own AI chips to reduce hardware dependency.
Despite competition from tech giants like Google and emerging players like Anthropic, OpenAI remains a market leader in AI innovation, product reach, and monetization.
Rajani Baburajan