Researchers and roboticists from Queensland University of Technology (QUT) have developed a faster, more accurate way for robots to grasp objects across a wide range of settings. The new method lets robots pick items out of cluttered areas, including objects that are moving or changing, making robots more useful in both industrial and domestic environments.
With the new technique, the robot captures a depth image of its environment and maps every pixel to a grasp: for each point in the image, the network estimates the quality of a grasp centred there, along with the gripper angle and width needed to execute it, and then selects the best candidate. The approach, based on a Generative Grasping Convolutional Neural Network (GG-CNN), achieves accuracy rates of 92 percent in static environments and 85 percent for dynamic grasping.
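To make the pixel-wise idea concrete, here is a minimal sketch in Python of how a system might choose a grasp from GG-CNN-style per-pixel outputs. The function and variable names are illustrative rather than QUT's actual code, and it assumes the network has already produced three maps the same size as the depth image:

```python
import numpy as np

def select_best_grasp(quality_map, angle_map, width_map):
    """Pick the grasp at the pixel with the highest predicted quality.

    All three inputs are H x W arrays, one value per depth-image pixel:
      quality_map -- estimated chance a grasp at that pixel succeeds (0 to 1)
      angle_map   -- gripper rotation (radians) to use at that pixel
      width_map   -- gripper opening width (metres) to use at that pixel
    """
    # Find the single most promising pixel and recover its (row, col) index.
    row, col = np.unravel_index(np.argmax(quality_map), quality_map.shape)
    return {
        "pixel": (row, col),                   # where to centre the gripper
        "angle": float(angle_map[row, col]),   # how to rotate it
        "width": float(width_map[row, col]),   # how far to open it
        "quality": float(quality_map[row, col]),
    }

# Example with random stand-in maps (a real system would obtain these
# from the network's forward pass on a depth image):
h, w = 300, 300
rng = np.random.default_rng(0)
grasp = select_best_grasp(rng.random((h, w)),
                          rng.uniform(-np.pi / 2, np.pi / 2, (h, w)),
                          rng.uniform(0.0, 0.15, (h, w)))
print(grasp)
```

Because every pixel already carries a full grasp estimate, picking the best grasp reduces to a single argmax over the quality map, with no separate sampling-and-ranking stage.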
Grasping has been difficult for robots in the past because they typically had to test many candidate grasps before finding one that worked. They have also struggled to pick up a previously grasped item after it has been moved or changed in some way. In both industrial and home environments, objects will not always be in the same place or even the same position, so robots need to adapt to movement no matter where they operate.
The new method performs real-time, object-independent grasp synthesis for closed-loop grasping, addressing each of the issues above. It opens new doors for robots in many industries and areas, ranging from factories to the home.
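Closed-loop grasping means the grasp is re-estimated on every camera frame while the arm moves, which is what lets the robot track objects that shift mid-grasp. A hedged sketch of that cycle, reusing `select_best_grasp` from above; the `camera`, `network`, and `arm` interfaces are hypothetical stand-ins, not a real API:

```python
def closed_loop_grasp(camera, network, arm, min_quality=0.5):
    """Re-plan the grasp on every frame until the gripper reaches the target.

    `camera`, `network`, and `arm` are assumed stand-ins for a depth
    camera, a GG-CNN-style model, and a robot arm controller.
    """
    while not arm.at_target():
        depth = camera.get_depth_image()               # fresh view of the scene
        q, angle, width = network.predict_grasp_maps(depth)
        grasp = select_best_grasp(q, angle, width)     # sketch defined earlier
        if grasp["quality"] < min_quality:
            continue  # nothing graspable in this frame; try the next one
        # Servo a small step toward the latest best grasp; because this
        # repeats every frame, the target updates if the object moves.
        arm.move_toward(grasp["pixel"], grasp["angle"], grasp["width"])
    arm.close_gripper()
```

The key design choice is that planning and execution are interleaved rather than sequential: instead of committing to one grasp computed from a single snapshot, the robot keeps correcting its trajectory as the scene changes.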