Inertial Sensor Data Integration in Computer Vision Systems
|Project Type: MSc Project|
|Research Field: Multisensor Fusion and Multirobot Systems for 3D Reconstruction|
|Sponsors: JNICT scholarship under the PRAXIS XXI program|
|Time span: -02/2002|
Advanced sensor systems, exploiting tight integration of multiple sensory modalities, have significantly increased the capabilities of autonomous robots and enlarged the application potential of vision systems. In this work we explore the cooperation between image and inertial sensors, motivated by the interplay between the vestibular system and vision in humans and animals. Visual and inertial sensing are two sensory modalities that can be combined to yield robust solutions for image segmentation and recovery of three-dimensional structure.
In this work we survey the currently available low-cost inertial sensors. Using some of these sensors, we built an inertial system prototype and coupled it to the vision system used in this work, a stereo camera pair with vergence. Using the information about the vision system's attitude in space, given by the inertial sensors, we obtained some interesting results. We use the inertial vertical reference to infer one of the intrinsic parameters of the visual sensor, the focal length. The process requires at least one vanishing point in the image and the tracing of an artificial horizon.
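The focal-length step can be illustrated with a small sketch. The vanishing point of the world's vertical lines is the image of the gravity direction, so once the inertial sensors give the gravity vector in the camera frame, the unknown focal length falls out of a two-equation least-squares fit, assuming square pixels, zero skew, and a known principal point. All names below are illustrative, not the thesis implementation:

```python
import numpy as np

def focal_from_vertical_vp(vp, principal_point, g_cam):
    """Estimate the focal length (in pixels) from the vanishing point of
    world vertical lines, given the gravity direction measured in the
    camera frame by the inertial sensors.

    With K = [[f, 0, cx], [0, f, cy], [0, 0, 1]] (square pixels, no
    skew), the vertical vanishing point satisfies vp ~ K @ g_cam, which
    gives two linear equations in f:
        (u - cx) = f * gx / gz,   (v - cy) = f * gy / gz.
    They are solved here in least squares.
    """
    u, v = vp
    cx, cy = principal_point
    gx, gy, gz = np.asarray(g_cam, float) / np.linalg.norm(g_cam)
    if abs(gz) < 1e-9:
        # Optical axis horizontal: the vertical vanishing point is at infinity.
        raise ValueError("vertical vanishing point at infinity")
    a = np.array([gx / gz, gy / gz])   # coefficients of f
    b = np.array([u - cx, v - cy])     # measured offsets from the principal point
    return float(a @ b / (a @ a))
```

With the focal length recovered, the artificial horizon follows directly, since it is the image line of viewing directions orthogonal to the measured vertical.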
Based on the integration of inertial and visual information, we were able to detect three-dimensional world features such as the ground plane and vertical features. Relying on the known vertical reference and a few system parameters, we were able to determine the ground plane's geometric parameters and the stereo-pair mapping of image points that belong to the ground plane. This enabled the segmentation and three-dimensional reconstruction of ground plane patches, and was also used to identify three-dimensional vertical structures in a scene. Since the vertical reference does not provide a heading, image vanishing points can serve as an external heading reference. These features can be used to build a metric map useful for improving mobile robot navigation and autonomy.
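The stereo-pair mapping of ground-plane points is a plane-induced homography: with the plane normal fixed by the inertial vertical and the camera height known, the left-to-right mapping of ground-plane pixels is determined in closed form. The sketch below uses the standard formulation under an explicitly stated sign convention; the names and parameters are illustrative assumptions, not the thesis implementation:

```python
import numpy as np

def ground_plane_homography(K1, K2, R, t, n, d):
    """Plane-induced homography between the stereo pair.

    Convention (stated explicitly, since signs differ between texts):
    right-camera coordinates are X2 = R @ X1 + t, and ground-plane
    points satisfy n . X1 = d in the left camera frame, where n is the
    unit plane normal taken from the inertial vertical reference and d
    is the left camera's height above the ground.  Substituting the
    plane constraint into the rigid motion gives
        x2 ~ H x1,   H = K2 (R + t n^T / d) K1^{-1}.
    """
    n = np.asarray(n, float) / np.linalg.norm(n)
    H = K2 @ (R + np.outer(t, n) / d) @ np.linalg.inv(K1)
    return H / H[2, 2]          # normalize the homogeneous scale

def transfer(H, pt):
    """Map a left-image pixel (u, v) through the homography."""
    x = H @ np.array([pt[0], pt[1], 1.0])
    return x[:2] / x[2]
```

Pixels whose measured stereo correspondence agrees with the homography transfer can be segmented as ground plane; large residuals flag off-plane structure, such as the vertical features mentioned above.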