MIT’s Cheetah 3 robot can leap and gallop across rough terrain and climb stairs littered with trash and debris, all while maintaining its balance. The Cheetah 3 has no cameras to see with; it uses “blind locomotion” to make its way through its surroundings. The 90 lb. robot is the size of a full-grown Labrador retriever and is comparable in size to Boston Dynamics’ Spot robot.
"There are many unexpected behaviors the robot should be able to handle without relying too much on vision," said the robot's designer, Sangbae Kim, associate professor of mechanical engineering at MIT. "Vision can be noisy, slightly inaccurate, and sometimes not available, and if you rely too much on vision, your robot has to be very accurate in position and eventually will be slow. So we want the robot to rely more on tactile information. That way, it can handle unexpected obstacles while moving fast."
The MIT team believes that the Cheetah 3 could take on tasks that are too dangerous or inaccessible for humans, like power plant inspection. The Cheetah 3 is able to make its way through unstructured terrain and other difficult areas thanks to two algorithms designed by the MIT researchers: a contact-detection algorithm and a model-predictive control algorithm.
The contact-detection algorithm helps the robot determine when each leg should switch from swinging through the air to planting on the ground. The algorithm times this transition by constantly calculating three probabilities for each leg: the probability that the leg is in contact with the ground; the probability of the force generated once the leg hits the ground; and the probability that the leg is in mid-swing. It calculates these probabilities from data from gyroscopes, accelerometers and the joint positions of the robot’s legs. The team tested the algorithm by placing Cheetah 3 on a treadmill covered in debris and having the robot navigate through the obstacles.
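The idea of fusing a gait-schedule expectation with proprioceptive measurements can be sketched as a simple Bayesian update. This is an illustrative toy, not MIT’s actual implementation; the function name, prior, and likelihood values are all assumptions.

```python
# Hypothetical sketch of probabilistic contact detection: combine a
# gait-schedule prior (when the leg *should* touch down) with how well
# the measured joint forces match the "contact" vs "swing" hypotheses.
# Numbers and sensor models here are illustrative, not Cheetah 3's.

def contact_probability(phase_prior, lik_contact, lik_swing):
    """Bayes' rule: posterior probability that the leg is in contact,
    given the schedule prior and the two measurement likelihoods."""
    evidence = phase_prior * lik_contact + (1.0 - phase_prior) * lik_swing
    return phase_prior * lik_contact / evidence

# Example: the gait schedule says the leg is 70% likely to be in stance,
# and the measured joint torques look far more like ground contact
# than free swing.
p = contact_probability(phase_prior=0.7, lik_contact=0.9, lik_swing=0.1)

# Switch the leg controller from swing to stance once the fused
# probability crosses a threshold.
leg_mode = "stance" if p > 0.5 else "swing"
```

Fusing the schedule with touch data this way is what lets the robot commit to a step early when it strikes an unexpected obstacle, rather than waiting for the scheduled touchdown time.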
"If humans close our eyes and make a step, we have a mental model for where the ground might be and can prepare for it. But we also rely on the feel of touch of the ground," Kim said. "We are sort of doing the same thing by combining multiple [sources of] information to determine the transition time."
Cheetah 3’s blind locomotion is also partly due to the model-predictive control algorithm. This algorithm predicts how much force a leg should apply once it has committed to taking a step. It predicts the positions of the robot’s legs and body a half-second into the future, recomputing the calculation for each leg every 50 milliseconds.
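A half-second lookahead at 50-millisecond intervals amounts to a 10-step prediction horizon. The one-dimensional sketch below shows the predictive idea under heavy simplifying assumptions (point mass, constant candidate forces, grid search instead of a real optimizer); it is not MIT’s controller.

```python
# Illustrative model-predictive sketch: forward-simulate the body a
# half-second ahead in 50 ms steps, and pick the ground-reaction force
# whose prediction lands closest to the target velocity.
# All numbers are assumptions (mass from Cheetah 3's ~90 lb).

MASS = 41.0    # kg
DT = 0.05      # 50 ms control interval
HORIZON = 10   # 10 steps x 50 ms = 0.5 s lookahead

def predict_velocity(v0, force, steps=HORIZON):
    """Forward-simulate velocity under a constant applied force."""
    v = v0
    for _ in range(steps):
        v += force / MASS * DT
    return v

def choose_force(v0, v_target, candidates):
    """Pick the candidate force whose half-second prediction ends up
    closest to the target velocity."""
    return min(candidates, key=lambda f: abs(predict_velocity(v0, f) - v_target))

# The robot has been knocked sideways at 1.0 m/s (negative direction);
# search over candidate lateral forces to cancel that velocity.
best = choose_force(v0=-1.0, v_target=0.0,
                    candidates=[-100.0, -50.0, 0.0, 50.0, 100.0])
```

A real controller would optimize forces for all stance legs jointly, subject to friction and actuator limits, but the predict-then-select loop above is the core of the model-predictive idea.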
"The contact-detection algorithm will tell you, 'this is the time to apply forces on the ground,'" Kim said. "But once you're on the ground, now you need to calculate what kind of forces to apply so you can move the body in the right way. Say someone kicks the robot sideways. When the foot is already on the ground, the algorithm decides, 'How should I specify the forces on the foot? Because I have an undesirable velocity on the left, so I want to apply a force in the opposite direction to kill that velocity. If I apply 100 newtons in this opposite direction, what will happen a half-second later?'"
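Kim’s 100-newton question has a quick back-of-the-envelope answer via the impulse-momentum relation. The mass below is an assumption taken from the robot’s roughly 90 lb weight.

```python
# Worked version of Kim's example: the sideways velocity change from
# applying 100 N for half a second to a ~41 kg (90 lb) robot.
# Impulse-momentum theorem: delta_v = F * t / m

MASS_KG = 41.0
FORCE_N = 100.0
DURATION_S = 0.5

delta_v = FORCE_N * DURATION_S / MASS_KG  # roughly 1.2 m/s
```

So a sustained 100 N push in the opposite direction would, a half-second later, have cancelled a sideways velocity of about 1.2 m/s, which is the kind of prediction the controller checks before committing to a force.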
The researchers tested the model-predictive control algorithm in the lab by repeatedly kicking and pushing Cheetah 3 while it climbed stairs.
"We want a very good controller without vision first," Kim said. "And when we do add vision, even if it might give you the wrong information, the leg should be able to handle [obstacles]. Because what if it steps on something that a camera can't see? What will it do? That's where blind locomotion can help. We don't want to trust our vision too much."
The robot will be presented in October at the International Conference on Intelligent Robots and Systems in Madrid. Read more about the robot on the MIT site.