Software major Microsoft is betting big on artificial intelligence opportunities for enterprises and will dedicate more than 5,000 people to its AI business.
Microsoft, the US-based technology giant, said its AI and Research Group will have more than 5,000 computer scientists and engineers to focus on AI product efforts. Harry Shum, a 20-year Microsoft veteran, will head AI research.
Information Platform head David Ku, Cortana and Bing head Derrick Connell, and Ambient Computing head Vijay Mital will be part of the AI team.
The Microsoft AI and Research Group will encompass AI product engineering, basic and applied research labs, and New Experiences and Technologies (NExT).
Microsoft’s AI strategy revolves around four pillars:
Harness AI to fundamentally change human and computer interaction through agents such as Microsoft’s digital personal assistant Cortana
Infuse every application, from the photo app on people’s phones to Skype and Office 365, with intelligence
Make the same intelligent capabilities infused in Microsoft’s apps (cognitive capabilities such as vision and speech, and machine analytics) available to every application developer in the world
Build the world’s most powerful AI supercomputer with Azure and make it available to anyone, to enable people and organizations to harness its power
Microsoft aims to accelerate the delivery of AI capabilities to customers across agents, apps, services and infrastructure.
Satya Nadella, CEO of Microsoft, said: “At Microsoft, we are focused on empowering both people and organizations, by democratizing access to intelligence to help solve our most pressing challenges. To do this, we are infusing AI into everything we deliver across our computing platforms and experiences.”
The Microsoft AI and Research Group is hiring for its labs and offices worldwide. The latest openings are listed on Microsoft’s careers site.
Intel and AI
Intel is growing its AI business through acquisitions and is set to acquire Nervana Systems, a technology company headquartered in San Diego, California. Nervana has a fully optimized software and hardware stack for deep learning, and its IP and expertise in accelerating deep learning algorithms will expand Intel’s capabilities in AI.
Intel said its processors power more than 97 percent of servers deployed to support machine learning workloads. It claims its Xeon E5 processor is widely deployed for deep learning inference, while the recently launched Intel Xeon Phi processor delivers the scalable performance needed for deep learning training.