Industrial Electronics

New Research Tests Teleoperating Robots With Virtual Reality

04 October 2017

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) presented a virtual-reality (VR) system that allows users to teleoperate a robot using an Oculus Rift headset.

The system embeds the user in a VR control room with multiple sensor displays, making it feel like they are inside the robot’s head. Using gestures, users can match their movements to the robot’s movements in order to complete various tasks.

VR system from Computer Science and Artificial Intelligence Laboratory could make it easier for factory workers to telecommute. (Jason Dorfman, MIT CSAIL)

"A system like this could eventually help humans supervise robots from a distance," says CSAIL postdoctoral associate Jeffrey Lipton, who was the lead author on a related paper about the system. "By teleoperating robots from home, blue-collar workers would be able to telecommute and benefit from the IT revolution just as white-collar workers do now."

The researchers believe that this system could help put growing numbers of jobless video-gamers to work by “game-ifying” manufacturing positions.

The team demonstrated their VR control approach with the Baxter humanoid robot from Rethink Robotics. They said the approach can work on other robot platforms and is also compatible with the HTC Vive headset.

Lipton co-wrote the paper with CSAIL director Daniela Rus and researcher Aidan Fay.

Traditionally, there have been two main approaches to using VR for teleoperation.

In a ‘direct’ model, the user’s vision is directly coupled to the robot’s state. With these types of systems, a delayed signal could lead to nausea and headaches. The user’s viewpoint is also limited to one perspective.

In the ‘cyber-physical’ model, the user is separate from the robot and interacts with a virtual copy of the robot and its environment. This requires more data and specialized spaces.

The CSAIL team’s system is essentially a halfway point between these two methods. It solves the delay problem since the user constantly receives visual feedback from the virtual world. It also fixes the cyber-physical issue of being distinct from the robot: once a user puts on the headset and logs into the system, they will feel like they are inside the robot’s head.

The system mimics the ‘homunculus model of mind’: the idea that there is a small human inside the brain controlling our actions, viewing the images we see and understanding them for us. While it is a strange idea for humans, it fits well for robots.

Using Oculus’ controllers, users can interact with controls that appear within the virtual space to open and close the hand grippers to pick up, move and retrieve items. A user can plan movements based on the distance between the arm’s location marker and their hand while looking at a live display of the arm.

To make these movements possible, the human’s space is mapped into the virtual space, and the virtual space is then mapped into the robot space to provide a sense of co-location.
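As a rough illustration (not CSAIL's actual code), chaining the two mappings amounts to composing rigid transforms: a tracked hand position in the operator's frame is carried into the virtual room, and from there into the robot's workspace. The frame names and calibration offsets below are invented for the sketch.

```python
# Hypothetical sketch of co-location: compose two rigid transforms
# (human -> virtual, then virtual -> robot) as 4x4 homogeneous matrices.

def mat_mul(A, B):
    # Multiply two 4x4 matrices.
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(T, p):
    # Apply a homogeneous transform T to a 3D point p.
    x, y, z = p
    v = [x, y, z, 1.0]
    out = [sum(T[i][k] * v[k] for k in range(4)) for i in range(4)]
    return tuple(out[:3])

def translation(dx, dy, dz):
    # Pure-translation transform (rotation omitted to keep the sketch short).
    return [[1, 0, 0, dx], [0, 1, 0, dy], [0, 0, 1, dz], [0, 0, 0, 1]]

# Assumed calibration offsets, in metres (invented values):
T_human_to_virtual = translation(0.0, 0.0, -1.2)   # headset origin -> VR control room
T_virtual_to_robot = translation(0.5, 0.0, 1.0)    # VR control room -> robot base

# One composed transform carries a tracked hand straight into robot space.
T_human_to_robot = mat_mul(T_virtual_to_robot, T_human_to_virtual)

hand_in_human = (0.3, 0.1, 1.4)  # tracked controller position
print(tuple(round(c, 3) for c in apply(T_human_to_robot, hand_in_human)))
# -> (0.8, 0.1, 1.2)
```

In practice each stage would be a full pose (rotation plus translation) updated every frame from the headset's tracking, but the composition idea is the same.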

The system is also more flexible than previous systems, which might extract 2D information from each camera, build out a full 3D model of the environment, and then process and redisplay the data.

In contrast, the CSAIL team’s approach bypasses all of this by taking the 2D image from each camera and displaying it directly to the corresponding eye, leaving the user’s own visual system to infer depth.
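A minimal sketch of that stereo-passthrough idea (all class and method names here are hypothetical stand-ins, not the team's API): each of the robot's two cameras feeds the matching eye directly, with no intermediate 3D reconstruction step.

```python
# Hypothetical stereo passthrough: route each robot camera's 2D frame
# to the matching eye of the headset; no 3D model is ever built.

class Camera:
    def __init__(self, name):
        self.name = name
        self.frame_id = 0

    def grab(self):
        # Stand-in for reading one 2D frame from a robot-mounted camera.
        self.frame_id += 1
        return f"{self.name}-frame-{self.frame_id}"

class Headset:
    def __init__(self):
        self.last_shown = {}

    def show(self, eye, frame):
        # Stand-in for blitting a 2D image into one eye's render buffer.
        self.last_shown[eye] = frame

def render_stereo(left_cam, right_cam, headset):
    """One display refresh: per-eye 2D frames, depth fusion left to the human."""
    headset.show(eye="left", frame=left_cam.grab())
    headset.show(eye="right", frame=right_cam.grab())

hmd = Headset()
render_stereo(Camera("left"), Camera("right"), hmd)
print(hmd.last_shown)
# -> {'left': 'left-frame-1', 'right': 'right-frame-1'}
```

The design trade-off the article describes falls out of the loop body: there is nothing to reconstruct, so per-frame latency is just capture plus display.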

To test the system, the team first teleoperated Baxter to do simple tasks like picking up screws or stapling wires. They then had test users teleoperate the robot to pick up and stack blocks.

Users completed tasks at a higher rate than with the direct model, and those with gaming experience found the system easier to operate.

When tested against a state-of-the-art system, CSAIL’s system grasped objects successfully 95% of the time and was 57% faster at performing tasks. The team also showed that the system could pilot the robot from hundreds of miles away, using a hotel’s wireless network in Washington, D.C. to control Baxter at MIT.

"This contribution represents a major milestone in the effort to connect the user with the robot's space in an intuitive, natural, and effective manner," says Oussama Khatib, a computer science professor at Stanford University.

The team next wants to focus on making the system more scalable, supporting many users and many robots, and compatible with current automation technologies.

A paper on this research will be presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) in Vancouver.

To contact the author of this article, email Siobhan.Treacy@ieeeglobalspec.com

