At Supercomputing 2019, Intel expanded its data-centric silicon portfolio for moving, storing and processing data, announcing a new category of discrete general-purpose GPUs optimized for the convergence of high-performance computing (HPC) and artificial intelligence (AI), along with its oneAPI programming initiative.
Intel said the oneAPI industry initiative will deliver a unified, simplified programming model for application development across heterogeneous processing architectures, including CPUs, GPUs, FPGAs and other accelerators.
According to Intel, oneAPI represents millions of engineering hours in software development and marks an evolution from today's limiting, proprietary programming approaches to an open, standards-based model for cross-architecture developer engagement and innovation.
“HPC and AI workloads demand diverse architectures, ranging from CPUs, general-purpose GPUs and FPGAs, to more specialized deep-learning NNPs, which Intel demonstrated earlier this month,” said Raja Koduri, senior vice president, chief architect, and general manager of architecture, graphics and software at Intel.
The foundation of Intel's data-centric strategy is the Intel Xeon Scalable processor, which today powers more than 90 percent of the systems on the Top500 list of supercomputers. Intel says Xeon Scalable processors are the only x86 CPUs with built-in AI acceleration optimized to analyze the massive data sets in HPC workloads.
At Supercomputing 2019, Intel unveiled a new category of general-purpose GPUs based on Intel’s Xe architecture. Code-named “Ponte Vecchio,” this new high-performance, flexible discrete general-purpose GPU is architected for HPC modeling and simulation workloads and AI training.
Ponte Vecchio will be manufactured on Intel’s 7nm technology and will be Intel’s first Xe-based GPU optimized for HPC and AI workloads. Ponte Vecchio will leverage Intel’s Foveros 3D and EMIB packaging innovations and feature multiple technologies in-package, including high-bandwidth memory, Compute Express Link interconnect and other intellectual property.
Intel said the data-centric silicon portfolio and the oneAPI initiative lay the foundation for the convergence of HPC and AI workloads at exascale within the Aurora system at Argonne National Laboratory.
Aurora will be the first U.S. exascale system to leverage Intel’s data-centric technology portfolio, building upon the Intel Xeon Scalable platform and using Xe architecture-based GPUs, as well as Intel Optane DC persistent memory and connectivity technologies.
The compute node architecture of Aurora will feature two 10nm-based Intel Xeon Scalable processors (code-named Sapphire Rapids) and six Ponte Vecchio GPUs. Aurora will support over 10 petabytes of memory and over 230 petabytes of storage. Aurora will leverage the Cray Slingshot fabric to connect nodes across more than 200 racks.