Meta Unveils AI Accelerator Chip to Bolster Computing Capabilities

Meta Platforms, formerly known as Facebook, has revealed details about its latest in-house artificial intelligence accelerator chip, marking a significant development in the company’s efforts to bolster its computing capabilities.

Meta MTIA chipset for AI

Meta’s custom data center chip is aimed at addressing the increasing demand for computing power to support AI-driven products across platforms like Facebook, Instagram, and WhatsApp. Internally known as “Artemis,” this chip is designed to reduce Meta’s reliance on Nvidia’s AI chips and lower energy costs.


The Meta Training and Inference Accelerator (MTIA) chip represents a significant milestone in Meta’s broader custom silicon initiative. Apart from chip development, Meta has been investing heavily in software development to maximize the efficiency of its infrastructure.

“This chip’s architecture is fundamentally focused on providing the right balance of compute, memory bandwidth, and memory capacity for serving ranking and recommendation models,” Meta stated in a blog post, emphasizing the chip’s optimized design for AI workloads.

The latest iteration of MTIA represents a substantial advancement over its predecessor, offering more than double the compute and memory bandwidth while remaining tightly integrated with Meta’s workload requirements. It is engineered specifically to handle the ranking and recommendation models that are crucial for delivering high-quality user recommendations across Meta’s apps.

At the core of the accelerator lies an 8×8 grid of processing elements (PEs), delivering increased dense and sparse compute performance. This performance boost stems from architectural improvements, particularly in pipelining sparse compute operations, coupled with significant enhancements in local PE storage, on-chip SRAM size and bandwidth, and LPDDR5 capacity.
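As a mental model of how work maps onto such a grid, the sketch below partitions a matrix multiply across an 8×8 grid of processing elements, each owning one output tile. This is purely illustrative: the tile size, scheduling, and data movement are invented for the example and do not reflect MTIA’s actual microarchitecture.

```python
# Illustrative sketch only: mapping a tiled matrix multiply onto an
# 8x8 grid of processing elements (PEs). Tile size and scheduling are
# hypothetical, not MTIA's real design.
import numpy as np

GRID = 8          # 8x8 grid of PEs, as described for the accelerator
TILE = 4          # hypothetical tile edge length handled by each PE

N = GRID * TILE   # a 32x32 problem fills the grid exactly once
rng = np.random.default_rng(0)
A = rng.random((N, N))
B = rng.random((N, N))
C = np.zeros((N, N))

# Each (r, c) PE owns one TILE x TILE output tile of C and walks the
# shared inner dimension, accumulating partial products locally.
for r in range(GRID):
    for c in range(GRID):
        rows = slice(r * TILE, (r + 1) * TILE)
        cols = slice(c * TILE, (c + 1) * TILE)
        for k in range(GRID):
            ks = slice(k * TILE, (k + 1) * TILE)
            C[rows, cols] += A[rows, ks] @ B[ks, cols]

# The tiled result matches a plain matrix multiply.
assert np.allclose(C, A @ B)
```

In this toy scheme each PE works independently on its output tile, which is why on-chip coordination bandwidth (the NoC discussed below) and per-PE local storage matter so much in the real hardware.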

Moreover, the revamped MTIA design features an upgraded network on chip (NoC) architecture, doubling the bandwidth and facilitating seamless coordination among different processing elements with minimal latency.

To support the deployment of next-generation silicon, Meta has developed a robust rack-based system capable of accommodating up to 72 accelerators. This scalable system comprises three chassis housing 12 boards each, with two accelerators per board, and has been engineered to operate at higher clock speeds and power levels than its predecessor, delivering denser capability and improved performance across a diverse range of AI models.
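The rack capacity works out as a simple product. The two-accelerators-per-board figure below is an assumption drawn from Meta’s published MTIA system description rather than stated explicitly in this article:

```python
# Sanity check of the rack math: 3 chassis x 12 boards x 2 accelerators.
chassis_per_rack = 3
boards_per_chassis = 12
accelerators_per_board = 2  # assumption based on Meta's MTIA system description

total_accelerators = chassis_per_rack * boards_per_chassis * accelerators_per_board
print(total_accelerators)  # 72
```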

In tandem with hardware advancements, Meta has also prioritized software optimization, leveraging its expertise in PyTorch development to seamlessly integrate with MTIA. The MTIA stack, fully compatible with PyTorch 2.0, incorporates innovative features like TorchDynamo and TorchInductor, facilitating efficient model deployment and execution.

Preliminary results showcase a remarkable 3x performance improvement over the first-generation chip across key models evaluated. At the platform level, the enhanced MTIA system, combined with a powerful 2-socket CPU configuration, delivers a sixfold increase in model serving throughput and a 1.5x improvement in performance per watt, underscoring Meta’s commitment to optimizing AI workloads.

With MTIA already deployed in data centers and serving models in production, Meta is witnessing tangible benefits, enabling the allocation of additional compute resources to intensive AI tasks. The MTIA chip emerges as a complementary solution to commercially available GPUs, offering Meta the optimal balance of performance and efficiency tailored to its specific AI requirements.

Meta’s commitment to enhancing its computing capabilities is evident in its substantial investments in hardware procurement. CEO Mark Zuckerberg revealed plans to acquire approximately 350,000 flagship H100 chips from Nvidia this year, along with additional chips from other suppliers, totaling the equivalent of 600,000 H100 chips.

Taiwan Semiconductor Manufacturing Co (TSMC) will manufacture the new MTIA chip using its advanced 5nm process, according to a Reuters report.

The deployment of the MTIA chip in data centers marks the beginning of its engagement in serving AI applications. Meta has outlined plans to expand the scope of the MTIA chip, including support for generative AI workloads, through various ongoing programs.

Baburajan Kizhakedath
