Microsoft introduced a pair of custom-designed computing chips on Wednesday, following a trend among major tech companies of bringing essential technologies in-house to tackle the rising cost of delivering artificial intelligence (AI) services.
The US-based technology giant said it does not intend to sell the chips externally. Instead, Microsoft will use them to power its subscription software offerings and its Azure cloud computing service.
Microsoft’s global IaaS public cloud services market share was 21.5 percent in 2022, an increase from 20.6 percent in 2021, according to research firm Gartner.
During its Ignite developer conference in Seattle, Microsoft unveiled Maia, a new chip engineered to accelerate AI computing tasks. The chip will underpin its $30-per-month “Copilot” service for business software users, as well as offerings for developers seeking to create customized AI services.
By 2025, 30 percent of enterprises will have implemented an AI-augmented development and testing strategy, up from 5 percent in 2021, according to research firm Gartner.
Maia is designed to run large language models, the foundation of Microsoft’s Azure OpenAI service, which was developed in collaboration with OpenAI, the creator of ChatGPT.
Acknowledging that AI services can cost ten times as much to deliver as traditional services such as search engines, Microsoft outlined its strategy for containing those expenses. Executives said the company will streamline its AI efforts by channeling them through a common set of foundational AI models, a workload Maia is optimized for.
Scott Guthrie, the executive vice president of Microsoft’s cloud and AI group, emphasized the potential of Maia: “We think this gives us a way that we can provide better solutions to our customers that are faster and lower cost and higher quality.”
Additionally, Microsoft said it will offer Azure customers cloud services running on the latest flagship chips from Nvidia and Advanced Micro Devices (AMD), and that it plans to test GPT-4, OpenAI’s most advanced model, on AMD’s chips.
The company’s second chip, Cobalt, is a central processing unit (CPU) developed in part as an internal cost-saving measure. Built with technology from Arm Holdings, Cobalt is intended to compete with Amazon Web Services’ (AWS) in-house “Graviton” chips.
Microsoft plans not only to use Cobalt to power internal operations such as Teams, its business messaging tool, but also to offer customers direct access to the chip, challenging AWS in the cloud-computing chip market. Guthrie underscored that focus on competitiveness: “We are designing our Cobalt solution to ensure that we are very competitive both in terms of performance as well as price-to-performance (compared with Amazon’s chips).”
While specific technical details were scarce, Microsoft said both Maia and Cobalt are manufactured using 5-nanometer technology from Taiwan Semiconductor Manufacturing Co (TSMC). Rani Borkar, corporate vice president for Azure hardware systems and infrastructure, said Maia is networked with standard Ethernet cabling rather than the pricier custom Nvidia networking technology previously used in the supercomputers built for OpenAI.