Electronics360

MEMS and Sensors

How Will Social Robots Navigate?

30 September 2016

[Image: Mars Rover Spirit was an early mobile robot that tackled an environment much different from that of a cluttered and often-changing home. Image credit: NASA]

When we picture our lives in tomorrow’s smart cities, it is likely that we will jump into a car—not necessarily our own—to take us where we need to go, and have a social robot that does belong to us to assist us in navigating our lives. Robots have not only progressed physically; they are also smarter, sleeker, can truly learn, are available 24/7, and will most likely gain a status higher on the family food chain than the dog and cat.

However, what will it take to get them there? Used increasingly in industrial settings, robots are already learning much of what we will need them to know. One question, however, is how good will they be at navigating our homes?

Robbie moves in

Robots, the social variety, will take over the mundane aspects of our lives. However, as they pack their bags and move in, there are many considerations involving navigation in an indoor environment that are only now beginning to be solved. For example:

  • Navigation can involve internal or external information—what happens when that information is missing?
  • Navigation relies on sensors that are subject to drift. A relative positioning measurement is not the same as an absolute position. What will that mean for dead reckoning as our robot pal maneuvers through our home?
  • What effect will typical homes have on a robot’s ability to navigate? Considerations include stairs, uneven flooring, floor coverings, and so forth.

Navigation is a critical element for our eventual robotic pals once operations move beyond the fixed and structured environments of the factory floor.

Getting a sense of the issues

Navigating, or moving from one point to another, draws on a myriad of physical principles and as many sensing solutions. Types of sensors used include:

Inertial sensors

Inertial measurements help determine linear accelerations and angular velocities. Inertial navigation systems obtain velocity and position from inertial sensor measurements; the basis of this is dead reckoning. Typically, three accelerometers measure acceleration along three orthogonal axes, and their outputs are integrated twice (once to obtain velocity, then again to obtain position), while three gyroscopes measure rotation rates. Today, strap-down systems are common, whereby all sensors are fixed to the robot and the gyro data is used to transform accelerometer data into a navigation frame of reference. These strap-down systems offer greater accuracy and reliability at lower cost, draw less power, and are compact and lightweight, with performance suitable for mobile robots.
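As a rough illustration (not from the article), the strap-down dead-reckoning loop above can be sketched in two dimensions: integrate the gyro to track heading, rotate body-frame acceleration into the navigation frame, then integrate twice. All names and values here are hypothetical.

```python
import math

def dead_reckon(samples, dt):
    """Planar strap-down dead reckoning (illustrative sketch).

    samples: list of (ax, ay, wz) tuples -- body-frame accelerations
    in m/s^2 and yaw rate in rad/s at each time step of length dt.
    Returns the estimated (x, y, heading).
    """
    x = y = vx = vy = heading = 0.0
    for ax_b, ay_b, wz in samples:
        # Gyro integration tracks the body-to-navigation rotation.
        heading += wz * dt
        # Rotate body-frame acceleration into the navigation frame.
        c, s = math.cos(heading), math.sin(heading)
        ax = c * ax_b - s * ay_b
        ay = s * ax_b + c * ay_b
        # First integration: acceleration -> velocity.
        vx += ax * dt
        vy += ay * dt
        # Second integration: velocity -> position.
        x += vx * dt
        y += vy * dt
    return x, y, heading

# A robot accelerating straight ahead at 1 m/s^2 for 1 s (100 steps of 10 ms)
# ends up roughly 0.5 m from where it started, still facing forward:
print(dead_reckon([(1.0, 0.0, 0.0)] * 100, 0.01))
```

Note how position comes from two successive integrations: any small bias in the accelerometer or gyro grows quadratically in position, which is exactly the drift problem raised earlier.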

Adding vision

Inertial data is also used in autonomous visual systems, as well as in head stabilization, posture control and body equilibrium. In active vision systems, inertial data provides the information needed for image stabilization, and it is necessary whenever knowing which way is horizontal or vertical matters to an autonomous system.

Negotiating the unknown

Range sensors are important for navigating unstructured and unknown environments, specifically to avoid obstacles, identify possible routes and detect established landmarks. Range sensing can be accomplished with magnetic, inductive, capacitive, ultrasound, microwave and optical techniques. Ultrasound and optical range sensors are common in mobile robotics. Ultrasound sensors are low cost and easy to interface with; their challenges, however, include low spatial resolution, crosstalk, errors caused by multiple specular reflections, and low acquisition rates. Optical range sensors provide real-time, accurate measurements with high spatial resolution.

Reactive collision detection

Reactive collision or obstacle avoidance for mobile robot navigation in a dynamic environment can be based on many, often similar, approaches. For example:

  • The Virtual Force Field approach is a real-time avoidance system based on certainty grids and potential fields.
  • The Dynamic Window approach takes into consideration the kinematic and dynamic constraints of the robot.
  • The Nearness Diagram offers a divide-and-conquer strategy that simplifies navigation where it can be particularly troublesome.
  • The modified beam curvature method predicts collisions and combines this with a reactive approach for fast obstacle avoidance, using a situated-activity paradigm along with divide-and-conquer strategies.
  • The Virtual Semi-Circles method, proposed for cluttered, dense and complex environments, integrates four separate capabilities: division, evaluation, decision and motion generation.

The number of possible approaches gives a sense of how complex navigation is, and how many researchers are trying to solve it.
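Of the approaches above, the Dynamic Window approach lends itself to a compact sketch: sample only the velocities reachable within one control cycle (the "dynamic window"), forward-simulate each candidate, and score trajectories by goal progress, clearance and speed. This is a simplified, hypothetical rendering; the sample step sizes, weights and parameter names are all assumptions, not the published algorithm.

```python
import math

def dwa_step(pose, vel, goal, obstacles,
             v_max=0.5, w_max=1.5, a_max=0.5, aw_max=2.0,
             dt=0.1, horizon=1.5, robot_radius=0.2):
    """One simplified Dynamic Window step.

    pose: (x, y, theta); vel: current (v, w); goal: (gx, gy);
    obstacles: list of (ox, oy) points. Returns the best (v, w) command.
    """
    v0, w0 = vel
    best, best_score = (0.0, 0.0), -float("inf")
    # Dynamic window: velocities reachable in one cycle, within limits.
    v_lo, v_hi = max(0.0, v0 - a_max * dt), min(v_max, v0 + a_max * dt)
    w_lo, w_hi = max(-w_max, w0 - aw_max * dt), min(w_max, w0 + aw_max * dt)
    v = v_lo
    while v <= v_hi + 1e-9:
        w = w_lo
        while w <= w_hi + 1e-9:
            # Forward-simulate this (v, w) pair over the horizon.
            x, y, th = pose
            clearance = float("inf")
            for _ in range(round(horizon / dt)):
                x += v * math.cos(th) * dt
                y += v * math.sin(th) * dt
                th += w * dt
                for ox, oy in obstacles:
                    clearance = min(clearance, math.hypot(ox - x, oy - y))
            if clearance >= robot_radius:  # discard colliding trajectories
                goal_term = -math.hypot(goal[0] - x, goal[1] - y)
                score = goal_term + 0.3 * min(clearance, 1.0) + 0.2 * v
                if score > best_score:
                    best_score, best = score, (v, w)
            w += 0.1
        v += 0.05
    return best

# With a clear path to a goal straight ahead, the planner picks the fastest
# straight-line velocity available inside the window:
print(dwa_step((0.0, 0.0, 0.0), (0.0, 0.0), (5.0, 0.0), []))
```

The key design point is that only dynamically feasible commands are ever evaluated, which is what distinguishes this family from purely geometric avoidance schemes.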

A cloud approach

Robots will need more than sensors alone to avoid tripping on the living room rug, running into the dog, and to move efficiently past untold barriers. The amount of data that must be stored within the robot is huge and getting larger, and both storage and search efficiency suffer.

Networked robots provide a solution, spreading processing and control across remote, dedicated servers. For flexibility and scalability of resources, environment maps and other pertinent data can be stored elsewhere: in the cloud, to be exact.

One potential solution, for example, comes from the BioRobotics Institute of Scuola Superiore Sant’Anna in Pontedera, Italy. According to a paper published by the Institute in the International Journal of Social Robotics, cloud computing is a promising fit for consumer and assisted-living applications, exploiting user-centered interfaces, computational capabilities, on-demand provisioning, large data storage, quality of service (QoS), scalability and flexibility.

The work shows that the cloud can provide the environmental maps and upload new maps when sharing knowledge. In addition, landmarks and environmental tags can be used to assist in both navigation and localization. In the proposed solution, environmental maps are divided into sub-maps and stored in cloud storage. The robot retrieves what maps it needs and when. Navigation is still onboard to provide safety. By using maps stored in the cloud, tags and landmarks, the robot becomes more aware of its environment without being pre-programmed to do so. By adding the cloud, there is the ability to share knowledge, which accelerates and simplifies the robot’s learning and mobility.
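The sub-map idea described above can be sketched as a small cache in front of a cloud store: the robot fetches only the sub-maps covering its surroundings and evicts the ones it has moved away from. This is a hypothetical illustration; the class names, grid keying and cache policy are assumptions, not the Institute's implementation.

```python
from collections import OrderedDict

class CloudMapStore:
    """Stands in for cloud storage holding environment sub-maps."""
    def __init__(self, submaps):
        self._submaps = submaps  # {(cell_x, cell_y): sub-map payload}

    def fetch(self, cell):
        return self._submaps[cell]  # in reality, a network round-trip

class RobotNavigator:
    """Keeps only the sub-maps near the robot in local memory."""
    def __init__(self, store, cell_size=5.0, cache_limit=4):
        self.store = store
        self.cell_size = cell_size      # meters covered by one sub-map
        self.cache_limit = cache_limit  # onboard memory budget
        self.cache = OrderedDict()      # LRU cache of fetched sub-maps

    def submap_for(self, x, y):
        cell = (int(x // self.cell_size), int(y // self.cell_size))
        if cell in self.cache:
            self.cache.move_to_end(cell)  # mark as recently used
        else:
            self.cache[cell] = self.store.fetch(cell)  # pull from the cloud
            if len(self.cache) > self.cache_limit:
                self.cache.popitem(last=False)  # evict least recently used
        return self.cache[cell]

store = CloudMapStore({(i, j): f"grid-{i}-{j}"
                       for i in range(10) for j in range(10)})
nav = RobotNavigator(store)
print(nav.submap_for(12.0, 3.0))  # fetches the sub-map covering (12, 3)
```

Because navigation itself stays onboard (as the paper requires for safety), a dropped connection only stalls map refreshes, not obstacle avoidance.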

Robots, to be of any real use in the home, outdoor environments, or business settings outside of the factory, will need to rely on navigation whose configuration and management are simplified and reliable. As artificial intelligence further prompts these robots to become not only autonomous but decision-making entities, the volumes of data necessary for operation will continue to grow astronomically. The cloud-based solution represents a reasonable, and maybe less cluttered, path for the robot to take.


