Electronics and Semiconductors

New Method Takes Autonomous Robot from Tool to Partner on the Battlefield

16 July 2018

The U.S. Army Research Laboratory (ARL) and Carnegie Mellon University's Robotics Institute have teamed up to quickly teach robots novel traversal behaviors with minimal human interaction. Mobile robots taught these behaviors can autonomously carry out the navigation tasks their human partners expect of them.

One goal for the researchers was to create a reliable autonomous robot that can be a teammate to the soldier, rather than a tool that the soldier has to operate. The robot uses learned intelligence to perceive, reason and make decisions.

A small unmanned Clearpath Husky robot, which was used by ARL researchers to develop a new technique to quickly teach robots novel traversal behaviors with minimal human oversight. (Source: US Army)

"If a robot acts as a teammate, tasks can be accomplished faster and more situational awareness can be obtained," Wigness said. "Further, robot teammates can be used as an initial investigator for potentially dangerous scenarios, thereby keeping soldiers further from harm. This research focuses on how robot intelligence can be learned from a few human example demonstrations. The learning process is fast and requires minimal human demonstration, making it an ideal learning technique for on-the-fly learning in the field when mission requirements change."

The first investigations focused on the robot learning traversal behaviors based on its visual perception of the terrain and objects in its surrounding environment.

In the lab, robots were taught to navigate to points in a given environment while staying near the side of the road or remaining hidden behind buildings. Different tasks can be encoded by leveraging inverse optimal control, also known as inverse reinforcement learning. To teach a behavior, a human drove the robot along a demonstration trajectory; that trajectory was then related to the visual terrain and object features the robot perceived, allowing it to learn a cost function over its environment.
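To make the learning-from-demonstration idea concrete, below is a minimal illustrative sketch (not ARL's actual code) of fitting a terrain cost function to a single demonstrated path by feature matching, the core mechanism behind inverse optimal control. The grid world, feature layers, greedy planner and learning rate are all simplifying assumptions.

```python
# Hypothetical sketch: learn per-feature terrain costs so that a planner's
# preferred path uses terrain features similar to a human demonstration.
import numpy as np

rng = np.random.default_rng(0)

# Each grid cell is described by a vector of visual terrain features,
# e.g. [road-ness, grass-ness, cover-from-buildings] (assumed layers).
H, W, F = 10, 10, 3
features = rng.random((H, W, F))

def path_features(path):
    """Sum the feature vectors along a path of (row, col) cells."""
    return sum(features[r, c] for r, c in path)

def cheapest_path(weights):
    """Greedy left-to-right walk preferring low-cost cells.

    A stand-in for a real planner (A*, dynamic programming, etc.)."""
    r = H // 2
    path = [(r, 0)]
    for c in range(1, W):
        candidates = [nr for nr in (r - 1, r, r + 1) if 0 <= nr < H]
        r = min(candidates, key=lambda nr: features[nr, c] @ weights)
        path.append((r, c))
    return path

# A human-driven demonstration; here simply a straight line along row 2.
demo = [(2, c) for c in range(W)]

# Feature matching: raise the cost of features the planner over-uses
# relative to the demonstration, until the two paths look alike.
weights = np.zeros(F)
for _ in range(200):
    learner = cheapest_path(weights)
    grad = path_features(learner) - path_features(demo)
    weights += 0.1 * grad
    weights = np.maximum(weights, 0)  # keep costs non-negative

print("learned terrain cost weights:", weights)
```

The intuition is that the learned weights penalize terrain the human avoided, so the same cost function can then drive autonomous planning through new areas.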

"The challenges and operating scenarios that we focus on here at ARL are extremely unique compared to other research being performed," Wigness said. "We seek to create intelligent robotic systems that reliably operate in warfighter environments, meaning the scene is highly unstructured, possibly noisy, and we need to do this given relatively little a priori knowledge of the current state of the environment. The fact that our problem statement is so different than so many other researchers allows ARL to make a huge impact in autonomous systems research. Our techniques, by the very definition of the problem, must be robust to noise and have the ability to learn with relatively small amounts of data."

Wigness explained that as the research continues, the focus will shift to more complex behaviors, including learning from features beyond visual perception.

"Our learning framework is flexible enough to use a priori intel that may be available about an environment. This could include information about areas that are likely visible by adversaries or areas known to have reliable communication. This additional information may be relevant for certain mission scenarios, and learning with respect to these features would enhance the intelligence of the mobile robot."

The research also explores how learned behaviors transfer between different mobile platforms. So far, the work has been demonstrated on a Clearpath Husky robot, which has a low-to-the-ground visual field of view.

"Transferring this technology to larger platforms will introduce new perception viewpoints and different platform maneuvering capabilities," Wigness said. "Learning to encode behaviors that can be easily transferred between different platforms would be extremely valuable given a team of heterogeneous robots. In this case, the behavior can be learned on one platform instead of each platform individually."

"The capability for the Next Generation Combat Vehicle to autonomously maneuver at optempo in the battlefield of the future will enable powerful new tactics while removing risk to the Soldier," John Rogers, ARL researcher, said, "If the NGCV encounters unforeseen conditions which require teleoperation, our approach could be used to learn to autonomously handle these types of conditions in the future.”

You can read the paper on the research here.


