Want to work from home? If you’re a factory worker operating machinery, this probably isn’t possible. But virtual reality could change that.
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have presented a virtual-reality (VR) system that allows a user to teleoperate a robot through an Oculus Rift headset. Embedded in a VR control room with multiple sensor displays, the user completes various tasks by matching his or her movements to the robot’s via hand controllers.
By “game-ifying” manufacturing positions in this way, such a system might help employ the growing number of video-gamers who lack jobs.
“A system like this could eventually help humans supervise robots from a distance,” said Jeffrey Lipton, a CSAIL postdoctoral associate and lead author on a related paper about the system. “By teleoperating robots from home, blue-collar workers would be able to tele-commute and benefit from the IT revolution just as white-collar workers do now.”
VR teleoperation has traditionally taken two main approaches. In a “direct” model, the user's vision is directly coupled to the state of the robot. There are a couple of drawbacks to this approach: The user’s viewpoint is limited to one perspective, and a delayed signal can lead to nausea and headaches. By contrast, the “cyber-physical” model separates the user from the robot; the user interacts with a virtual copy of the robot and the environment. This approach, however, requires specialized spaces – and much more data.
The CSAIL team’s approach is somewhere in between. Wearing the headset allows the user to feel as if they’re inside the robot’s head, while constant visual feedback from the virtual world solves the delay problem. To provide a sense of co-location, the human’s space is mapped into the virtual space – and the virtual space is then mapped into the robot space. The Oculus controllers allow interaction with controls in that virtual space. Users can open and close the hand grippers to pick up, move and retrieve items. And, instead of extracting 2D information from each camera and building out a full 3D model that can be redisplayed, as other systems might do, the system simply takes the 2D images displayed to each eye. The human brain does the rest, filling in 3D information through inference.
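The chain of mappings described above – human space into virtual space, virtual space into robot space – can be expressed as a composition of coordinate-frame transforms. The following is a minimal, hypothetical sketch (not the CSAIL team’s code), assuming each mapping is a rigid 4×4 homogeneous transform with illustrative calibration values:

```python
# Hypothetical sketch of the human -> virtual -> robot frame mapping
# described in the article. Transform values are illustrative only.
import numpy as np

def make_transform(rotation_deg: float, translation) -> np.ndarray:
    """Build a 4x4 homogeneous transform: rotation about z, then translation."""
    t = np.radians(rotation_deg)
    m = np.eye(4)
    m[:2, :2] = [[np.cos(t), -np.sin(t)],
                 [np.sin(t),  np.cos(t)]]
    m[:3, 3] = translation
    return m

# Assumed calibration transforms (placeholders, not real calibration data).
HUMAN_TO_VIRTUAL = make_transform(0.0, [0.0, 0.0, 1.2])   # place the user in the VR control room
VIRTUAL_TO_ROBOT = make_transform(90.0, [0.5, 0.0, 0.0])  # align the room with the robot's base frame

def controller_to_robot(point_human: np.ndarray) -> np.ndarray:
    """Map a hand-controller position in the human's space to the robot's space."""
    p = np.append(point_human, 1.0)  # homogeneous coordinates
    return (VIRTUAL_TO_ROBOT @ HUMAN_TO_VIRTUAL @ p)[:3]

# A controller held 0.3 m forward at chest height maps to a gripper target
# in the robot's own frame.
target = controller_to_robot(np.array([0.3, 0.0, 0.9]))
```

Because only the composed transform of each controller pose needs to be sent, the system avoids reconstructing a full 3D scene model; the stereo 2D camera images go straight to the headset’s two eyes, as the article notes.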
It’s akin to the “homunculus model” of mind – the idea that a small human inside the brain controls the actions of the body, viewing and interpreting images from the outside.
In tests of the system, users completed tasks at a much higher rate than with the “direct” model. Perhaps not surprisingly, those with gaming experience found the system much easier to use.
Using a hotel’s wireless network in Washington, D.C., to control the robot back at MIT, the team also showed that the robot could be piloted from hundreds of miles away.
The team eventually wants to focus on scalability, with many users and different types of robots compatible with current automation technologies. For the project, which was funded in part by Boeing and the National Science Foundation, the team used the Baxter humanoid robot from Rethink Robotics. They have said that other robot platforms would also work, and that the system is also compatible with the HTC Vive headset.