Industrial Electronics

Video: New Sensor Technology Gives Robots Greater Touch Sensitivity

05 June 2017

A robotic gripper with the GelSight sensor was able to grasp a small screwdriver, removing it from and inserting it back into a slot. (Source: MIT)

Two teams from MIT are working with a new sensor technology called GelSight, which uses physical contact with an object to produce a detailed 3-D map of its surface, to give robots greater sensitivity and dexterity.

One MIT team is working to use the data from the GelSight sensor to enable a robot to judge the hardness of surfaces it touches, a capability household robots would need to handle everyday objects. The other team is looking at enabling robots to manipulate smaller objects, a difficult task for current machines.

The sensor consists of a block of transparent rubber with one face coated with metallic paint. When the paint-coated face is pressed against an object, it conforms to the object's shape. The metallic paint makes the object's surface uniformly reflective, so its geometry becomes easier for computer vision algorithms to interpret. Mounted on the face opposite the paint are three colored lights and a single camera.

“[The system] has colored lights at different angles, and then it has this reflective material, and by looking at the colors, the computer … can figure out the 3-D shape of what that thing is,” says Ted Adelson, a professor at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).
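The arrangement Adelson describes, with known lights at different angles and a single camera, is essentially photometric stereo: each light constrains the surface normal at every pixel, and solving the resulting system recovers the 3-D shape. Below is a minimal NumPy sketch under a Lambertian reflectance assumption; the light directions and image data are placeholder values, not parameters from the actual GelSight device.

import numpy as np

# Photometric stereo: recover per-pixel surface normals from images taken
# under k known light directions, assuming Lambertian shading (I = L @ n).

def surface_normals(images, light_dirs):
    """images: (k, H, W) intensities; light_dirs: (k, 3) unit vectors."""
    k, h, w = images.shape
    I = images.reshape(k, -1)                           # (k, H*W)
    # Least-squares solve light_dirs @ n = I for each pixel's scaled normal.
    n, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, H*W)
    albedo = np.linalg.norm(n, axis=0)                  # per-pixel brightness scale
    normals = n / np.maximum(albedo, 1e-8)              # unit normals
    return normals.reshape(3, h, w), albedo.reshape(h, w)

# Illustrative call: three lights, as in the GelSight description.
lights = np.array([[0.5, 0.0, 0.87], [-0.25, 0.43, 0.87], [-0.25, -0.43, 0.87]])
frames = np.random.rand(3, 64, 64)  # stand-in for camera frames
normals, albedo = surface_normals(frames, lights)

In a real colored-light setup, the color channels of a single RGB frame can stand in for separately lit images, and the recovered normals are integrated to produce a depth map of the contact surface.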

In both projects, the GelSight sensor was mounted on one side of a robotic gripper.

In the hardness project, researchers used confectionery molds to create 400 groups of silicone objects, with 16 objects per group. In each group, the objects had the same shapes but different degrees of hardness. Researchers pressed the GelSight sensor against each object by hand, recording how the contact pattern changed over time and producing a short movie of each press. To standardize the data format and keep its size manageable, they used five frames from each movie, evenly spaced in time.
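Sampling five evenly spaced frames from each press movie is a straightforward subsampling step. A sketch of how it might be done with OpenCV follows; the file name and frame count are hypothetical.

import cv2
import numpy as np

def sample_frames(video_path, n_frames=5):
    """Return n_frames evenly spaced frames from a contact-press video."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    # Evenly spaced indices spanning the full press sequence.
    indices = np.linspace(0, total - 1, n_frames).astype(int)
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames  # fixed-length input regardless of press duration

frames = sample_frames("press_object_001.mp4")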

The data was then fed into a neural network, which looked for correlations between changes in contact patterns and hardness measurements. The result was a system that produced hardness scores with very high accuracy. Researchers also conducted informal tests in which human subjects ranked the hardness of fruits and vegetables; in every instance, the object the humans ranked as hardest matched the one the GelSight-equipped robot identified.
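The article does not describe the network's architecture, but a regressor mapping a fixed five-frame stack to a scalar hardness score could look roughly like the PyTorch sketch below. The layer sizes, training target, and loss are assumptions for illustration, not the MIT team's design.

import torch
import torch.nn as nn

class HardnessNet(nn.Module):
    """Toy CNN: maps a stack of 5 grayscale contact frames to a hardness score."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(5, 16, 3, stride=2, padding=1), nn.ReLU(),  # 5 frames as channels
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # scalar hardness score

    def forward(self, x):  # x: (batch, 5, H, W)
        f = self.features(x).flatten(1)
        return self.head(f).squeeze(1)

model = HardnessNet()
batch = torch.randn(8, 5, 64, 64)  # stand-in for sampled GelSight frames
targets = torch.rand(8) * 100      # stand-in for known silicone hardness values
loss = nn.functional.mse_loss(model(batch), targets)
loss.backward()  # regression against the measured hardness of each object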

In the smaller object manipulation project, MIT researchers designed control algorithms that use a computer vision system to guide the robot’s gripper toward a tool and then turn location estimation over to a GelSight sensor once the robot has the tool in hand. In one experiment, the gripper had to grasp a small screwdriver, remove it from a holster and return it. As long as the vision system’s estimate of the screwdriver’s initial position was accurate to within a few centimeters, the algorithms could deduce which part of the screwdriver the GelSight sensor was touching and determine its position in the robot’s hand.
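The handoff the researchers describe, from a coarse vision estimate to tactile localization once the tool is grasped, can be summarized as control flow. The following is a schematic sketch only: the object names, methods, and tolerance are hypothetical stand-ins, not the MIT team's actual algorithms.

VISION_TOLERANCE_M = 0.03  # the vision estimate only needs to be coarse (~ a few cm)

def grasp_and_localize(vision, gripper, tactile):
    """Vision guides the reach; touch takes over once the object is in hand."""
    coarse_pose = vision.estimate_pose("screwdriver")  # accurate to within cm
    gripper.move_to(coarse_pose, tolerance=VISION_TOLERANCE_M)
    gripper.close()

    # With the tool grasped, match the GelSight contact imprint against the
    # tool's known geometry to deduce which part is being touched.
    imprint = tactile.read_height_map()
    in_hand_pose = tactile.match_to_model(imprint, model="screwdriver")
    return in_hand_pose  # precise pose in the gripper frame, usable for insertion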

Researchers believe that tactile sensors such as GelSight, combined with deep learning and computer vision, will have a big impact on robotics in the near future, making robots more capable of sensing and manipulating the objects they handle.

To contact the author of this article, email PBrown@globalspec.com

