Attentional Behaviors for Environment Modeling by a Mobile Robot

Abstract

Building robots capable of interacting effectively and autonomously with their environments requires providing them with the ability to model the world. That is to say, the robot must interpret the environment not as a set of points, but as an organization of more complex structures with human-like meaning. Among the variety of sensory inputs that could be used to equip a robot, vision is one of the most informative. Through vision, the robot can analyze the appearance of objects. Stereo vision additionally makes it possible to extract spatial information about the environment, allowing the robot to determine the structure of the different elements composing it. However, vision suffers from some limitations when considered in isolation. On one hand, cameras have a limited field of view that can only be compensated for through camera movements. On the other hand, the world is formed by non-convex structures that can only be interpreted by actively exploring the environment. Hence, the robot must move its head and body to give meaning to the perceived elements composing its environment. The combination of stereo vision and active exploration provides a means to model the world. While the robot explores the environment, perceived regions can be clustered, forming more complex structures such as walls and objects on the floor. Nevertheless, even in simple scenarios with few rooms and obstacles, the robot must be endowed with different abilities to successfully solve the task. For instance, during exploration, the robot must be able to decide where to look while selecting where to go, avoiding obstacles and identifying what it is looking at. From the point of view of perception, different visual behaviors take part in this process, such as those that direct gaze towards what the robot can recognize and model, or those dedicated to keeping the robot within safety limits.
From the action perspective, the robot has to move in different ways depending on internal states (i.e., the status of the modeling process) and external situations (i.e., obstacles on the way to a target position). Perception and action should influence each other in such a way that deciding where to look depends on what the robot is doing, but also so that what is being perceived determines what the robot can or cannot do. Our solution to all these questions relies heavily on visual attention. Specifically, the foundation of our proposal is that attention can organize the perceptual and action processes by acting as an intermediary between the two. The attentional connection allows, on one hand, driving the perceptual process according to the behavioral requirements and, on the other hand, modulating actions on the basis of the perceptual results of the attentional control. Thus, attention solves the where-to-look problem and, additionally, prevents behavioral disorganization by limiting the possible actions that can be performed in a given situation. Based on this double functionality, we have developed an attention-based control scheme that generates autonomous behavior in a mobile robot endowed with a 4-DOF (degrees of freedom) stereo vision head. The proposed system is a behavioral architecture that uses attention as the connection between perception and action. Behaviors modulate the attention system according to their particular goals and generate actions consistent with the selected focus of attention. Coordination among behaviors emerges from the attentional nature of the system, so that the robot can simultaneously execute several independent, but cooperative, behaviors to reach complex goals. In this paper, we apply our control architecture to the problem of environment modeling using stereo vision by defining the attentional and behavioral components that provide the robot with the capacity to explore and model the world.
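The coordination pattern described above — behaviors modulating a shared attention system and then acting consistently with the selected focus — can be illustrated with a minimal sketch. All class names, target labels, and saliency values here are hypothetical placeholders for illustration, not taken from the chapter:

```python
from dataclasses import dataclass


@dataclass
class AttentionRequest:
    target: str      # label of the region/object the behavior wants to fixate
    saliency: float  # behavior-assigned relevance of that target


class Behavior:
    """A behavioral component: it modulates attention and acts on the focus."""

    def __init__(self, name):
        self.name = name

    def propose(self, percepts):
        """Return AttentionRequests for the targets this behavior cares about."""
        raise NotImplementedError

    def act(self, focus):
        """Emit an action consistent with the currently selected focus, or None."""
        raise NotImplementedError


class ExploreBehavior(Behavior):
    """Requests attention on not-yet-modeled regions and moves toward them."""

    def propose(self, percepts):
        return [AttentionRequest(t, 0.5) for t in percepts if t.startswith("unmodeled")]

    def act(self, focus):
        return f"{self.name}: move toward {focus}" if focus.startswith("unmodeled") else None


class AvoidBehavior(Behavior):
    """Requests attention on obstacles (higher saliency) and steers away."""

    def propose(self, percepts):
        return [AttentionRequest(t, 0.9) for t in percepts if t.startswith("obstacle")]

    def act(self, focus):
        return f"{self.name}: steer away from {focus}" if focus.startswith("obstacle") else None


def control_step(behaviors, percepts):
    """One cycle: behaviors modulate attention, the most salient requested
    target becomes the focus, and each behavior then contributes only
    actions consistent with that focus."""
    requests = [r for b in behaviors for r in b.propose(percepts)]
    if not requests:
        return None, []
    focus = max(requests, key=lambda r: r.saliency).target
    actions = [a for a in (b.act(focus) for b in behaviors) if a is not None]
    return focus, actions
```

In this toy cycle, an obstacle outbids an unmodeled region for the focus of attention, and only the avoidance behavior then produces an action — a crude stand-in for how attentional selection can both answer where to look and constrain what the robot may do.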

Publication DOI: https://doi.org/10.5772/17904
Divisions: College of Engineering & Physical Sciences
Additional Information: © 2011 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike-3.0 License, which permits use, distribution and reproduction for non-commercial purposes, provided the original is properly cited and derivative works building on this content are distributed under the same license.
ISBN: 978-953-307-837-3
Last Modified: 27 Dec 2023 10:10
Date Deposited: 16 Jun 2021 14:06
Related URLs: https://www.int ... -a-mobile-robot (Publisher URL)
PURE Output Type: Chapter (peer-reviewed)
Published Date: 2011-07-19
Authors: Bachiller, Pilar
Bustos, Pablo
Manso, Luis (ORCID Profile 0000-0003-2616-1120)
