AMD announces MI6, MI8, and MI25 machine intelligence accelerators

AMD announced Radeon Instinct, a new suite of hardware and open-source software offerings, at its Technology Summit held last week.

According to the Sunnyvale-based company, these offerings are designed to increase performance, efficiency and ease of implementation of deep learning workloads.

Technology experts see this product as an alternative to Nvidia’s Tesla line of products.

Radeon Instinct comes with a promise to offer organizations GPU-based solutions for deep learning inference and training.

Radeon Instinct accelerators feature passive cooling, AMD MultiUser GPU (MxGPU) hardware virtualization technology conforming to the SR-IOV (Single Root I/O Virtualization) industry standard, and 64-bit PCIe addressing with Large Base Address Register (BAR) support for multi-GPU peer-to-peer communication.

The three new Radeon Instinct products scheduled for release in 2017 are the MI6, MI8 and MI25 accelerators.

The MI6 accelerator is based on the Polaris GPU architecture and will be a passively cooled inference accelerator optimized for jobs/second/Joule, delivering 5.7 TFLOPS of peak FP16 performance at 150W board power with 16GB of GPU memory.

The MI8 accelerator is based on the "Fiji" Nano GPU. It is a small-form-factor HPC and inference accelerator delivering 8.2 TFLOPS of peak FP16 performance at less than 175W board power with 4GB of High-Bandwidth Memory (HBM).

The third, the MI25 accelerator, will use AMD’s Vega GPU architecture and is designed for deep learning training, optimized for time-to-solution. AMD has not yet launched the Vega architecture.
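Since AMD pitches the MI6 in terms of jobs/second/Joule, the announced figures make a quick perf-per-watt comparison possible. The sketch below simply divides the quoted peak FP16 throughput by the quoted board power for the two boards with published specs; the MI8's power is an upper bound ("less than 175W"), so its real efficiency would be at least the figure shown.

```python
# Rough perf-per-watt comparison from the announced specs:
# (peak FP16 TFLOPS, board power in watts). The MI8's power
# is quoted as "less than 175W", so 175 is an upper bound.
specs = {
    "MI6 (Polaris)": (5.7, 150),
    "MI8 (Fiji)": (8.2, 175),
}

for name, (tflops, watts) in specs.items():
    # 1 TFLOPS = 1000 GFLOPS; GFLOPS/W is the usual efficiency metric.
    gflops_per_watt = tflops * 1000 / watts
    print(f"{name}: {gflops_per_watt:.1f} GFLOPS/W peak FP16")
    # MI6 -> 38.0 GFLOPS/W, MI8 -> 46.9 GFLOPS/W
```

Peak numbers ignore memory bandwidth and real workload behavior, so they are only a first-order comparison.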

In addition to hardware offerings, Radeon Instinct includes MIOpen, a free, open-source library that will help developers build high-performance machine intelligence applications when it becomes available in Q1 2017.

Apart from that, AMD offers the ROCm platform for deep learning. ROCm is now optimized to accelerate popular deep learning frameworks, including Caffe, Torch 7, and TensorFlow; its rich integrations allow programmers to focus on training neural networks rather than on low-level performance tuning.

ROCm is intended to serve as the foundation of the next evolution of machine intelligence problem sets, with domain-specific compilers for linear algebra and tensors and an open compiler and language runtime.

AMD is also investing in interconnect technologies that go beyond today's PCIe Gen3 standard to further improve performance for tomorrow's machine intelligence applications.

The company's president and CEO, Lisa Su, believes that Radeon Instinct will dramatically advance the pace of machine intelligence.
