3D Scene Interpretation Through Computer Vision from the Coordinated Analysis of Images Obtained by a Team of Mobile Robots

Research Project DPI2007-66556-C03-03 (2007-2010)

Interpretación de Escenas 3D Mediante Visión Artificial a Partir del Análisis Coordinado de Imágenes Adquiridas por un Equipo de Robots Móviles

This project aims at designing new computer vision techniques that allow the automatic interpretation of 3D scenes from the analysis of images provided by a team of mobile robots. The new schemes will take advantage of the different images corresponding to the same region delivered by each member of the team by means of a coordinated inspection. Such an interpretation is expected to contribute to increasing the performance of cooperative tasks carried out by the team of robots. In this way, improved accuracy of both the self-localization of the mobile robots in space and the global 3D model of the environment is expected as a result of these techniques.


In particular, this project has the following specific objectives:

  • Computer Vision Goals: new algorithms that allow robots to automatically obtain a manageable description of the objects in a 3D scene and of their spatial and topological interrelations, followed by 3D scene interpretation based on that description.
  • Localization Goals: simultaneous self-localization and mapping techniques with 6 degrees of freedom (SLAM 3D) over unstructured environments. Previous scene interpretation will be especially considered for the accomplishment of this objective.
  • Interaction Goals: new techniques for the collaborative exploration of unknown environments through a team of mobile robots, by taking into account previously proposed simulation algorithms.
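Not part of the project's own code, but the "6 degrees of freedom" in the SLAM 3D objective refers to estimating a full rigid-body pose (3D position plus orientation). A minimal sketch of how such poses are composed as 4x4 homogeneous transforms (all numbers are illustrative):

```python
import numpy as np

def pose_matrix(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def rot_z(theta):
    """Rotation about the vertical axis (yaw), one of the six degrees of freedom."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Dead-reckoned pose composition: the robot drives 1 m forward (x),
# turns 90 degrees, then drives 1 m forward again in its new heading.
step = pose_matrix(np.eye(3), [1.0, 0.0, 0.0])
turn = pose_matrix(rot_z(np.pi / 2), [0.0, 0.0, 0.0])
pose = step @ turn @ step

print(np.round(pose[:3, 3], 3))  # final position: [1. 1. 0.]
```

In a full SLAM system these composed odometry estimates drift, which is why the project fuses them with visual landmark observations; the transform algebra above is the common underlying representation.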

The team of mobile robots is constituted by a NAO V3 robot, a Pioneer P2-AT robot, a Pioneer P3-AT robot and three Koala robots. The first is a humanoid robot; the other five are wheeled all-terrain robots. Each Koala will mount a binocular Color Bumblebee camera. On the other hand, the P2-AT is endowed with a trinocular Color Digiclops camera, and the P3-AT mounts a binocular Color Bumblebee-2 camera. All 3D cameras will be controlled by embedded computers. All robots will communicate with each other and with external computers through a wireless network.


Coordinated Exploration of Wide-Area Environments with Multiple Robots Through Vision-Based 3D SLAM

Research Project DPI2004-07993-C03-03 (2004-2007)

Exploración Coordinada de Entornos Extensos con Múltiples Robots Mediante SLAM 3D Basado en Visión

This project aims at designing and implementing new exploration strategies that allow a team of mobile robots to deploy collaboratively in order to obtain processable three-dimensional models of wide-area, unknown environments. Every robot will map its surroundings and simultaneously determine its position and orientation in space by processing visual information obtained by means of an off-the-shelf stereo camera. The accuracy of both the robot’s pose and the locally obtained 3D models will be continuously improved by integrating information gathered by separate robots. In this way, a consistent global 3D model of the environment is expected as a result of the exploration process.
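The 3D models come from stereo vision: a pixel matched between the two camera views yields a disparity, from which depth follows the standard pinhole relation Z = f·B/d. A minimal back-projection sketch (the calibration numbers below are illustrative, not the Bumblebee's actual parameters):

```python
def stereo_to_3d(u, v, disparity, fx, fy, cx, cy, baseline):
    """Back-project a pixel with known disparity into camera coordinates.

    Depth follows the pinhole stereo relation Z = fx * B / d; X and Y
    are recovered by inverting the pinhole projection at that depth.
    """
    if disparity <= 0:
        raise ValueError("disparity must be positive")
    z = fx * baseline / disparity
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# Illustrative calibration: 400 px focal length, principal point
# (320, 240), 12 cm baseline; a 16 px disparity puts the point ~3 m away.
point = stereo_to_3d(u=400, v=240, disparity=16.0,
                     fx=400.0, fy=400.0, cx=320.0, cy=240.0, baseline=0.12)
print(point)  # approximately (0.6, 0.0, 3.0)
```

Repeating this over all matched pixels produces the local 3D point clouds that the robots then register against each other to build the global model.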


The team of explorers is constituted by three Koala robots and a Pioneer P2-AT robot. The four robots are all-terrain. Each Koala will mount a binocular Color Bumblebee camera, one ultrasound-based range sensor and a ring of infrared proximity sensors. On the other hand, the P2-AT is endowed with a ring of ultrasound-based range sensors and a trinocular Color Digiclops camera. All 3D cameras will be controlled by embedded computers. All robots will communicate with each other and with external computers through a Bluetooth wireless network.

This project is a part of a larger coordinated project entitled: Development and Integration of Perception and Actuation Techniques in Teams of Mobile Robots (“Desarrollo e Integración de Técnicas de Percepción y Actuación en Grupos de Robots Móviles”) DPI2004-07993-C03.


Multiagent System of Advanced Observers for Scene Analysis and Recognition in Adverse Environments

Research Project MCYT DPI2001-2094-C03-02 (2001-2004)

Sistema Multiagente de Observadores Avanzados para Análisis y Reconocimiento de Escenas en Entornos Adversos

This project aims at designing and implementing a computer system based on multiagent technology that allows the coordination of a team of mobile robots endowed with sensory devices, in order that they deploy over an unstructured and possibly adverse environment and cooperate to obtain information about it. Information acquired in a distributed manner by the robotic team must be analyzed and integrated with the purpose of obtaining a complete and precise computer model of the recognized area. That model will simplify further planning and decision-making activities performed by other agents in charge of more complex tasks related to the aforementioned environment (rescue personnel, fire brigades, etc.). Hence, the goals of this project are similar to those of the RoboCup Rescue Robot League, although we do not intend to participate in the tournament.


The team of explorers is constituted by two Koala robots and a Pioneer P2-AT robot. The three robots are all-terrain. Each Koala mounts two ultrasound-based range sensors and a ring of infrared proximity sensors. On the other hand, the P2-AT is endowed with a ring of ultrasound-based range sensors, a trinocular Color Digiclops camera and two off-the-shelf webcams. Those sensors are controlled by an embedded computer. The three robots may communicate with each other and with external computers through a Bluetooth wireless network.

The specific goal of this project is that both Koala robots deploy over an unknown environment in a coordinated manner, obtaining a map of it (an occupancy grid) which indicates the location of free areas, as well as obstacles detected by both the ultrasound sensors and the infrared ring. From this map, the P2-AT robot must wander over the environment with the aim of building a detailed 3D model of the scene from the images acquired by the stereo camera. The P2-AT robot will preferentially move towards areas with possible victims (unknown objects whose appearance does not coincide with most of the environment). Unknown objects will be identified by the Koalas through information provided by their infrared sensors, and, alternatively, by the P2-AT after applying texture analysis to the images acquired by the trinocular camera and/or the webcams.
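The occupancy grid the Koalas build is typically maintained with a log-odds update: each ultrasound reading makes the cells along the beam more likely free and the cell at the measured range more likely occupied. A minimal single-beam sketch (cell size, update weights and the ray-tracing scheme are illustrative assumptions, not the project's actual parameters):

```python
import math

def mark_ray(grid, x0, y0, angle, rng, cell=0.1, l_occ=0.85, l_free=-0.4):
    """Log-odds occupancy update from one range reading: step along the
    beam marking traversed cells as free and the endpoint as occupied."""
    steps = int(rng / cell)
    for k in range(steps + 1):
        i = round((x0 + k * cell * math.cos(angle)) / cell)
        j = round((y0 + k * cell * math.sin(angle)) / cell)
        if 0 <= i < len(grid) and 0 <= j < len(grid[0]):
            grid[i][j] += l_occ if k == steps else l_free

def to_probability(log_odds):
    """Convert a cell's log-odds value back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))

# 50x50 grid of 10 cm cells; one 2 m ultrasound reading straight ahead
# from the origin along the x axis.
grid = [[0.0] * 50 for _ in range(50)]
mark_ray(grid, x0=0.0, y0=0.0, angle=0.0, rng=2.0)
print(to_probability(grid[20][0]))  # endpoint cell: probability above 0.5
print(to_probability(grid[10][0]))  # traversed cell: probability below 0.5
```

Because additions commute, readings from both Koalas can be accumulated into the same grid, which is what makes the coordinated deployment pay off.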

This project is a part of a larger coordinated project entitled: Design and Planning of Dynamic Physical Agents (“Diseño y Planificación de Agentes Físicos Dinámicos”) MCYT DPI2001-2094-C03.
