IBM’s Project Intu extends Watson functions to any device


IBM today announced its plan to extend Watson capabilities to any device.

The IT major unveiled the experimental release of its new platform, Project Intu, which allows developers to embed Watson functions into end-user devices across a variety of form factors.

The company said that Project Intu, in its experimental form, is now accessible via the Watson Developer Cloud and is also available on the Intu Gateway and on GitHub.

According to IBM, Project Intu simplifies the process for developers wanting to create cognitive experiences in various form factors such as spaces, avatars, robots or other IoT devices. Furthermore, it extends cognitive technology into the physical world.

The platform enables devices to interact more naturally with users, triggering different emotions and behaviors and creating more meaningful and immersive experiences for users.

Developers can integrate Watson services, such as Conversation, Language and Visual Recognition, with the capabilities of the device to, in essence, act out the interaction with the user.

Instead of requiring a developer to program each individual movement of a device or avatar, Project Intu makes it easy to combine movements appropriate for a specific task, such as assisting a customer in a retail setting or greeting a visitor in a hotel, in a way that feels natural to the end user, IBM said.
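To illustrate the idea of task-level composition described above, the following minimal Python sketch registers named tasks and plays back the sequence of low-level movements associated with each. All class, task and movement names here are hypothetical, chosen for illustration; they are not the actual Project Intu API.

```python
class BehaviorRegistry:
    """Hypothetical sketch: maps high-level tasks to ordered movement sequences,
    so a developer names a task rather than scripting each individual movement."""

    def __init__(self):
        self._tasks = {}

    def register(self, task, movements):
        # Store the ordered list of low-level movements for a named task.
        self._tasks[task] = list(movements)

    def perform(self, task):
        # Return the movement sequence the device should act out for this task.
        return self._tasks.get(task, [])


registry = BehaviorRegistry()
registry.register("greet_hotel_visitor", ["turn_to_face", "wave", "speak_greeting"])
registry.register("assist_retail_customer", ["approach", "speak_offer_help", "point_to_aisle"])

print(registry.perform("greet_hotel_visitor"))
# ['turn_to_face', 'wave', 'speak_greeting']
```

In a real deployment, each movement name would be bound to device-specific actuation, so the same task definition could drive an avatar, a robot or another IoT device.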

Recent IDC research found that by 2018, 75 percent of developer teams will include cognitive or AI capabilities in one or more of their applications or services.

The research also envisages that by 2019, 40 percent of all digital initiatives, and 100 percent of all IoT efforts, will be supported by cognitive or AI technologies.

Project Intu offers developers a ready-made environment on which to build cognitive experiences running on a wide variety of operating systems and hardware, from the Raspberry Pi to macOS, Windows and Linux machines, to name a few.

As an example, IBM has worked with Nexmo, the Vonage API platform, to demonstrate how Intu can be integrated with both Watson and third-party APIs, bringing an additional dimension to cognitive interactions through voice-enabled experiences built on the Nexmo Voice API's WebSocket support.

“IBM is taking cognitive technology beyond a physical technology interface like a smartphone or a robot toward an even more natural form of human and machine interaction,” said Rob High, IBM Fellow, VP and CTO, IBM Watson.

“Project Intu allows users to build embodied systems that reason, learn and interact with humans to create a presence with the people that use them – these cognitive-enabled avatars and devices could transform industries like retail, elder care, and industrial and social robotics.”
