Artificial intelligence has learned to predict tactile sensations by sight

Researchers from the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology (MIT CSAIL) have taught a neural network to predict the tactile sensations that would arise from touching an object based on its appearance, and vice versa. According to The Next Web, the researchers presented their work at the CVPR 2019 conference, which opened in Long Beach, California on June 16.

Despite the active development of machine vision and touch sensing, modern robots are still limited in how they interact with the outside world. The researchers expect that teaching machines to predict tactile sensations from interacting with objects, or to predict an object's appearance from such sensations, will let robots manipulate objects faster, more safely, and more accurately.

To train the neural network, the researchers from MIT CSAIL used an industrial Kuka robot whose manipulator was equipped with the GelSight tactile system: a sensor combining a camera, multi-color LED illumination, and a gel layer.

When the sensor interacts with objects, the gel layer deforms. The deformations are illuminated by the LEDs and recorded by the camera. From several deformation frames, the system then reconstructs a three-dimensional model of the part of the object that the GelSight sensor touched.
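The real GelSight pipeline recovers a 3D surface from multiple LED-lit frames via photometric stereo; the article does not give the algorithm, so the sketch below uses a deliberately simplified stand-in. It treats the per-pixel brightness change between a deformed frame and an undeformed reference frame as a crude proxy for indentation depth. All names and the toy data are illustrative assumptions.

```python
import numpy as np

def deformation_depth_proxy(reference, deformed):
    """Crude per-pixel deformation estimate for a GelSight-style sensor.

    Simplified stand-in for the real photometric-stereo reconstruction:
    brightness change against the undeformed reference frame is used as
    a proxy for how far the gel was pressed in at each pixel.
    """
    diff = deformed.astype(float) - reference.astype(float)
    # Average the change over the color channels lit by the multi-color LEDs.
    depth = np.abs(diff).mean(axis=-1)
    # Normalize to [0, 1] so the map is easy to visualize.
    peak = depth.max()
    return depth / peak if peak > 0 else depth

# Toy example: a flat reference frame and a frame with a circular press mark.
ref = np.full((64, 64, 3), 100, dtype=np.uint8)
touch = ref.copy()
yy, xx = np.mgrid[:64, :64]
touch[(yy - 32) ** 2 + (xx - 32) ** 2 < 100] = 180  # bright pressed region
depth = deformation_depth_proxy(ref, touch)
# depth is 1.0 inside the press mark and 0.0 on the untouched gel
```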

During training, the Kuka robot touched various objects, collecting data on the tactile sensations produced by each touch, while a video recording of the touches was made simultaneously. In total, the robot probed 200 different household objects, making about 12,000 touches.

From these recordings, the researchers compiled a database of 3 million video frames associated with tactile data, which they named VisGel. This database was then used to train the neural network.
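The article says the video frames were associated with tactile data but not how. One plausible way to build such a cross-modal dataset is to match each tactile reading with the nearest-in-time video frame; the sketch below shows that pairing step. The function name, field layout, and timestamps are assumptions, not the VisGel format.

```python
def build_pairs(video_frames, tactile_frames, timestamps_v, timestamps_t):
    """Pair each tactile frame with the video frame closest in time.

    Hypothetical sketch of assembling (visual, tactile) training pairs;
    a linear scan over timestamps is fine at this illustrative scale.
    """
    pairs = []
    for t_idx, ts in enumerate(timestamps_t):
        v_idx = min(range(len(timestamps_v)),
                    key=lambda i: abs(timestamps_v[i] - ts))
        pairs.append((video_frames[v_idx], tactile_frames[t_idx]))
    return pairs

# Toy example: four video frames, two tactile readings.
video = ["v0", "v1", "v2", "v3"]
touch = ["t0", "t1"]
pairs = build_pairs(video, touch, [0.0, 0.1, 0.2, 0.3], [0.11, 0.29])
# pairs -> [("v1", "t0"), ("v3", "t1")]
```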

In March of this year, American engineers presented an algorithm that allows a robotic manipulator to recognize unfamiliar objects, identify their main components, and interact with them. For example, the robot can recognize a mug, locate its handle, and hang the mug on a drying rack.

Source: The Next Web