Hijacking a home robot with Watson API
I recently presented at the 2016 Watson Developer Conference (WDC) in San Francisco on November 9 and 10. It was a mind-blowing event that brought together deep-learning experts and developers. I’m passionate about robots, and with the help of Watson API cognitive services, you can easily add AI capabilities to any robot out there.
My initial idea was to hijack a typical STEM robot and plug a couple of cognitive Watson API services into it. I applied to present at WDC with a robot powered by AI over the cloud and connected to smart devices, to show that anyone can take advantage of robot teleoperation supported by cognitive analysis.
My workshop application for WDC was accepted, so I set out to describe my quick prototype. I added visual analysis for remotely requested pictures and demonstrated how easy it is to deliver such functionality with Watson services hosted on Bluemix. I used a common IoT device, a Raspberry Pi, as the interface to the robot platform and for picture acquisition.
Adding the camera to my Raspberry Pi.
I used two channels to invoke actions on the robot. One was texting over the Twilio API, a mobile messaging service: I could send a text message to the robot, which would take a picture and classify its contents with the Watson Visual Recognition service.
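To make the texting channel concrete, here is a minimal sketch of how an incoming SMS body might be mapped to a robot command. The keywords and the JSON command shape are my assumptions for illustration, not the exact protocol used in the demo:

```python
import json

# Map a few illustrative SMS keywords to robot commands. These keywords
# and the JSON shape are assumptions, not the demo's actual protocol.
SMS_COMMANDS = {
    "picture": {"action": "take_picture", "classify": True},
    "forward": {"action": "move", "direction": "forward"},
    "stop":    {"action": "stop"},
}

def sms_to_command(body: str) -> str:
    """Turn the text of an incoming SMS into a JSON command payload."""
    keyword = body.strip().lower()
    command = SMS_COMMANDS.get(keyword, {"action": "noop"})
    return json.dumps(command)

print(sms_to_command("Picture"))  # -> {"action": "take_picture", "classify": true}
```

In the real setup, a Twilio webhook would receive the SMS and forward the resulting command to the robot over the IoT messaging layer.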
The picture taken by Raspberry Pi during the WDC workshop.
One of the fundamental communication services the entire demo relied on was the Watson IoT Platform. It carried messages and commands between the cloud, the mobile device, and the Raspberry Pi, using MQTT as its transport layer. The schematic chart of the service can be found below:
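The Watson IoT Platform follows a fixed MQTT topic convention: devices publish events on `iot-2/evt/<event>/fmt/<format>` and receive commands on `iot-2/cmd/<command>/fmt/<format>`, with a client ID of the form `d:<org>:<type>:<device>`. A small sketch of those conventions (the event and command names here, like `picture`, are my own illustrative choices):

```python
def device_client_id(org_id: str, device_type: str, device_id: str) -> str:
    # Watson IoT device client IDs follow the d:<org>:<type>:<device> scheme.
    return f"d:{org_id}:{device_type}:{device_id}"

def event_topic(event_id: str, fmt: str = "json") -> str:
    # Devices publish events (e.g. a classified picture) on this topic.
    return f"iot-2/evt/{event_id}/fmt/{fmt}"

def command_topic(command_id: str, fmt: str = "json") -> str:
    # Devices subscribe to commands (e.g. "take a picture") on this topic.
    return f"iot-2/cmd/{command_id}/fmt/{fmt}"

print(event_topic("picture"))   # -> iot-2/evt/picture/fmt/json
print(command_topic("snap"))    # -> iot-2/cmd/snap/fmt/json
```

An actual device would hand these strings to an MQTT client (such as paho-mqtt) when connecting to the platform's broker.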
Flows were developed in Node-RED on the Raspberry Pi and in the IoT Boilerplate on Bluemix.
This chart shows the schematic flow of messages between the platforms; the actual demo also uses the Twilio API to trigger the action and send back the picture with its Watson API classification analysis.
The other service used in the demo was Cloudant NoSQL DB, which stored JSON documents with the encoded pictures. Furthermore, an iOS application written in Swift allowed the robot to be controlled from anywhere over the cloud and the internet. Think of having your own DIY teleoperated robot: a security robot, a robot vacuum, a robot lawnmower and so on.
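Since Cloudant stores plain JSON, a binary picture has to be encoded as text before it goes into a document. A minimal sketch of that encoding, assuming base64 and field names of my own choosing (Cloudant itself accepts any JSON document):

```python
import base64
import json

def picture_document(image_bytes: bytes, classification: str) -> str:
    """Build a Cloudant-style JSON document holding a base64-encoded picture.

    The field names here are illustrative, not the demo's actual schema.
    """
    doc = {
        "type": "picture",
        "classification": classification,
        "image_b64": base64.b64encode(image_bytes).decode("ascii"),
    }
    return json.dumps(doc)

def picture_from_document(doc_json: str) -> bytes:
    """Recover the raw image bytes from a stored document."""
    return base64.b64decode(json.loads(doc_json)["image_b64"])
```

The iOS app can then fetch the document over Cloudant's HTTP API and decode the image on the phone.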
In addition to my fast prototyping with Node-RED, Shantenu Agarwal, an IBM offering manager, and Russ Potapinski, the head of cognitive sciences at Woodside Energy, presented the experimental product “Project Intu.” The presenters showcased a massive, enterprise-focused approach to the same problem: enabling cognitive services on the IoT device.
I am sure that with Node-RED, Watson, and MobileFirst services you could easily construct a minimum viable prototype for a startup, while with Project Intu you could deploy an enterprise-wide solution. Finally, I was also an instructor for the mobile labs during the 2016 conference, which you can try yourself.
If you want to learn more about connecting mobile devices and robots, then I have good news for you. Together with the IBM developerWorks team and Dr. Lennart Frantzell from IBM Ecosystem Development, I have been developing a Massive Open Online Course (MOOC) on how to connect a mobile device to a robot and enhance its functionality with Watson cognitive services. The code for this MOOC is available on GitHub.
To keep up with my other projects, follow me on Twitter: @blumareks