When studying the world, people rely on vision and touch: by combining these senses, they understand which object they are looking at and holding in their hands, something that robots controlled by artificial intelligence cannot yet do. A team of researchers from the Massachusetts Institute of Technology (MIT) has been developing a system that will help robots overcome this limitation and learn to "feel" objects by touch.
MIT engineers built the system around a GelSight tactile sensor: the model generates tactile signals from images of objects and, working in the opposite direction, predicts from touch alone which object, and which part of it, the robotic arm manipulator is touching. To train the system, a webcam was used to record about 12,000 video clips of various objects being touched. Broken down into frames, these recordings produced a database of more than 3 million images paired with tactile signals.
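To make the data-collection step more concrete, the sketch below shows one way such a paired dataset could be assembled from the recordings. The directory layout, file naming, and one-to-one frame-to-tactile alignment are assumptions made for illustration; the article does not describe the actual pipeline.

```python
# A minimal sketch of assembling a paired vision/touch dataset.
# File layout and naming are hypothetical, not taken from the MIT work.
import os
import cv2  # OpenCV, used here to read video frames


def build_pairs(video_dir, tactile_dir):
    """Split each recorded clip into frames and pair every frame with the
    GelSight tactile image assumed to be captured at the same moment."""
    pairs = []
    for name in sorted(os.listdir(video_dir)):
        clip = os.path.splitext(name)[0]
        cap = cv2.VideoCapture(os.path.join(video_dir, name))
        idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # Assumed naming convention: one tactile image per video frame.
            tactile_path = os.path.join(tactile_dir, f"{clip}_{idx:05d}.png")
            if os.path.exists(tactile_path):
                pairs.append((frame, cv2.imread(tactile_path)))
            idx += 1
        cap.release()
    return pairs
```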
According to the scientists, at the current stage of development the model can look at a scene and imagine the feeling of touching a flat surface or a sharp edge. Yunzhu Li, who leads the team of researchers, says that by touching its surroundings the model can also predict its interaction with an object from purely tactile sensations. He noted that combining the two senses, vision and touch, can expand a robot's abilities and reduce the amount of data needed for tasks that involve grasping and manipulating objects.
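As a rough illustration of the two prediction directions mentioned above, vision to touch and touch to vision, the sketch below uses a pair of small encoder-decoder networks. The architecture, input size, and training setup are assumptions for illustration only; the article does not specify the MIT model.

```python
# A toy illustration of cross-modal prediction in both directions.
# The plain convolutional encoder-decoder is an assumed stand-in.
import torch
import torch.nn as nn


class CrossModalNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Both modalities are treated as 3-channel images
        # (a camera frame or a GelSight tactile image).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        # The same structure is reused for either direction: feed a visual
        # frame to predict a tactile image, or feed touch to predict vision.
        return self.decoder(self.encoder(x))


vision_to_touch = CrossModalNet()  # would be trained on (frame, tactile) pairs
touch_to_vision = CrossModalNet()  # would be trained on (tactile, frame) pairs

frame = torch.randn(1, 3, 64, 64)  # placeholder camera frame
predicted_touch = vision_to_touch(frame)
```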
In the future, the researchers hope to expand the model's capabilities and improve the accuracy of its interactions with the environment.