Praying mantises equipped with tiny 3D glasses have revealed that the insects possess a form of stereo vision not previously seen in any animal. Researchers see potential future applications in robot vision.
A team of Newcastle University scientists and engineers has been studying praying mantis vision to compare insect and human 3D vision. Many animals besides humans possess stereoscopic vision, but the praying mantis is the only insect known to share the trait.
The test procedure included crafting insect-sized 3D glasses and fixing them to the creatures with beeswax. The insect test subjects then attended a special insect cinema, where they were shown both prey images and the dot patterns used to test stereoscopic vision in humans. The latter test allowed the team to compare human and mantis vision directly.
Stereo vision in humans relies on the slightly different image each eye receives: the brain matches the static patterns in the two images, and the disparity between them encodes the distance to the scene being observed. The researchers discovered that stereopsis in their insect test subjects does not compare static image details at all. Instead, it compares temporal differences, looking for the places where each eye's image is changing over time. The mantis visual system perceives depth only through this mechanism.
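The distinction can be illustrated with a toy sketch. The code below is not the researchers' model; it is a minimal, hypothetical demonstration in which each "eye" sees an unrelated random pattern, but a small "prey" region flickers between two time steps, shifted between the eyes by a true disparity of 4 pixels. Matching the raw static patterns finds nothing meaningful, while matching the maps of where each image changed recovers the disparity:

```python
import random

def change_map(frame_a, frame_b):
    # Per-pixel magnitude of change between two time steps.
    return [abs(a - b) for a, b in zip(frame_a, frame_b)]

def best_disparity(left, right, max_d):
    # Brute-force search for the shift of `right` that best matches
    # `left` (minimum sum of absolute differences).
    best, best_err = 0, float("inf")
    n = len(left)
    for d in range(-max_d, max_d + 1):
        err = sum(abs(left[i] - right[i - d])
                  for i in range(max_d, n - max_d))
        if err < best_err:
            best, best_err = d, err
    return best

random.seed(0)
n = 60
# Each eye sees an unrelated random pattern at both time steps...
left_t0  = [random.random() for _ in range(n)]
right_t0 = [random.random() for _ in range(n)]
left_t1  = list(left_t0)
right_t1 = list(right_t0)
# ...except a small "prey" region that changes, offset between the
# eyes by a true disparity of 4 pixels.
true_d = 4
for i in range(30, 34):
    left_t1[i] = left_t0[i] + 1.0
    right_t1[i - true_d] = right_t0[i - true_d] + 1.0

# Static matching compares the raw, unrelated patterns; the result
# is essentially arbitrary. Change-based matching compares where
# each image changed and recovers the true disparity.
d_static = best_disparity(left_t0, right_t0, 8)
d_change = best_disparity(change_map(left_t0, left_t1),
                          change_map(right_t0, right_t1), 8)
print("static:", d_static, "change-based:", d_change)  # change-based: 4
```

This mirrors the experiment's key manipulation: even when the static content of the two eyes' images is completely different, a change-based matcher still localizes the target, because all it needs is for the *changing* region to appear consistently in both views.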
Another surprising result was that the insect's unique form of vision is, in this respect, more robust than human 3D vision. Even when the researchers sent completely different images to each mantis eye, the insect could still accurately judge the distance to the prey image and strike at it.
“This is a completely new form of 3D vision as it is based on change over time instead of static images,” said behavioral ecologist Dr. Vivek Nityananda at Newcastle University. “In mantises it is probably designed to answer the question ‘is there prey at the right distance for me to catch?’”
What does mantis vision have to do with robotic vision? The tiny praying mantis brain must run highly efficient algorithms for extracting depth from visual signals, whereas robotic stereo vision is typically modeled on the far more complex human system. Swapping those algorithms for a change-based system that requires drastically less processing could lead to low-power autonomous robots.
The researchers published their results in Current Biology.