Google Unveils New AI Chips and Arm-Based CPU for Data Centers

Google has announced the latest version of its data center artificial intelligence chips and introduced an Arm-based central processing unit (CPU), underscoring its push to innovate in data center technology.
Google’s tensor processing units (TPUs) have emerged as a notable alternative to Nvidia’s advanced AI chips, although access is largely limited to Google Cloud Platform rather than direct purchase, according to a Reuters report.

The company aims to democratize access to these technologies by offering the Arm-based CPU, dubbed Axion, through Google Cloud. According to Google, Axion surpasses the performance of x86 chips and general-purpose Arm chips in the cloud.

Mark Lohmeyer, Google Cloud’s vice president and general manager of compute and machine learning infrastructure, emphasized the ease of transition for customers to adopt Axion. He stated, “We’re making it easy for customers to bring their existing workloads to Arm… without re-architecting or re-writing their apps.”

The move aligns with an industry trend: rival cloud operators such as Amazon.com and Microsoft have also developed Arm CPUs to diversify their computing services. While Google has previously engineered custom chips for various purposes, including YouTube and AI, the introduction of a CPU marks a new milestone for the tech giant.

Notably, the new TPU v5p chip, designed to operate in pods of 8,960 chips, promises twice the raw performance compared to its predecessor. To optimize performance, Google employs liquid cooling within the pods.

Axion chip

Axion processors deliver up to 30 percent better performance than the fastest general-purpose Arm-based instances currently available in the cloud, up to 50 percent better performance than current-generation x86 chips from Intel and Advanced Micro Devices (AMD), and up to 60 percent better energy efficiency than comparable x86-based instances, said Amin Vahdat, Google’s vice president and general manager of Machine Learning, Systems, and Cloud AI.

“Google’s announcement of the new Axion CPU marks a significant milestone in delivering custom silicon that is optimized for Google’s infrastructure, and built on our high-performance Arm Neoverse V2 platform,” Rene Haas, CEO of Arm, said.

Google has already begun deploying services such as BigTable, Spanner, BigQuery, Blobstore, Pub/Sub, Google Earth Engine, and the YouTube Ads platform on current-generation Arm-based servers, and it plans to scale these and other services on the Axion platform in the near future.

The Axion processors, built using the Arm Neoverse V2 CPU, are tailored to excel in a variety of workloads, including web and app servers, containerized microservices, open-source databases, data analytics engines, media processing, and CPU-based AI training and inferencing.

Key to Axion’s exceptional performance is Titanium, a system of custom silicon microcontrollers and tiered scale-out offloads. Titanium offloads handle platform operations such as networking and security, freeing up Axion processors to dedicate more capacity and achieve improved performance for customer workloads. Furthermore, Titanium offloads storage I/O processing to Hyperdisk, a novel block storage service that offers dynamic provisioning in real time, decoupling performance from instance size.

Google Cloud’s commitment to efficiency is evident in its data centers, which are already 1.5 times more efficient than the industry average. Over the past five years, these data centers have achieved a threefold increase in computing power while consuming the same amount of electrical power.

Axion processors are built on the standard Armv9 architecture and instruction set, ensuring compatibility with existing software ecosystems. Google recently contributed to SystemReady Virtual Environment (VE), an interoperability standard developed by Arm, facilitating seamless operation of common operating systems and software packages on Arm-based servers and virtual machines. This standardization minimizes the need for extensive code rewrites when deploying Arm workloads on Google Cloud.
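Because Axion exposes the standard Armv9 architecture, portable software mostly needs to select the right binaries rather than change its logic. As a minimal illustration (a hypothetical helper, not part of any Google tooling), an application or deployment script could check the host architecture at runtime before picking, say, an arm64 versus amd64 container image:

```python
import platform
from typing import Optional

# Machine strings commonly reported by Arm hosts (Linux reports "aarch64",
# macOS reports "arm64"); this set is an illustrative assumption.
ARM_MACHINES = {"aarch64", "arm64"}

def is_arm(machine: Optional[str] = None) -> bool:
    """Return True when the reported machine string identifies an Arm CPU."""
    machine = machine or platform.machine()
    return machine.lower() in ARM_MACHINES

if __name__ == "__main__":
    # On an Axion-based instance this would report an Arm machine string.
    print(f"Running on Arm: {is_arm()}")
```

In practice, multi-architecture container images and standard package managers handle this selection automatically, which is what makes the "no re-architecting" claim plausible for most workloads.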

Customers will soon be able to leverage Axion processors across various Google Cloud services, including Google Compute Engine, Google Kubernetes Engine, Dataproc, Dataflow, Cloud Batch, and more. Arm-compatible software and solutions are readily available on the Google Cloud Marketplace, with preview support for Arm-based instance migration recently launched in the Migrate to Virtual Machines service.

Axion is already in use across several Google services, including YouTube Ads on Google Cloud. The company intends to expand these applications and make Axion available to the public later this year. Meanwhile, the TPU v5p chip is now generally available via Google’s cloud services.

Google has not disclosed whether it collaborated with design partners for Axion or if Broadcom, a previous partner on TPU chips, was involved in this project. Nevertheless, these developments signal Google’s commitment to pushing the boundaries of AI and cloud computing, setting the stage for further advancements in the industry.

Baburajan Kizhakedath
