Integration of Vision and Inertial Sensing
|Project Type: PhD Project|
|Research Field: Multisensor Fusion and Multirobot Systems for 3D Reconstruction|
|Time span: 12/2003-02/2006|
Inertial sensors coupled to cameras can provide valuable data about camera ego-motion and about how world features are expected to be oriented. Object recognition and tracking benefit from both visual and inertial information. Several human vision tasks rely on the inertial data provided by the vestibular system; artificial systems should also exploit this sensor fusion.
Micromachining has enabled the development of low-cost, single-chip inertial sensors. These can easily be incorporated alongside a camera's imaging sensor, providing an artificial vestibular system.
We will explore some of the benefits of combining the two sensing modalities, and how gravity can be used as a vertical reference. We will also focus on how the two sensors can be cross-calibrated so that they can be used in static and dynamic situations.
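As an illustration of the rotational part of such a cross-calibration, the camera-to-inertial rotation can be estimated by pairing the vertical direction observed by the camera (e.g. from vertical vanishing points) with the gravity vector sensed by the accelerometers over several static poses. The sketch below is a minimal, hypothetical formulation (Wahba's problem solved via SVD); the function and variable names are illustrative, not from the project:

```python
import numpy as np

def relative_rotation(v_cam, g_imu):
    """Least-squares rotation R with R @ g_imu[i] ~ v_cam[i] (Wahba's problem via SVD)."""
    A = np.zeros((3, 3))
    for v, g in zip(v_cam, g_imu):
        A += np.outer(v / np.linalg.norm(v), g / np.linalg.norm(g))
    U, _, Vt = np.linalg.svd(A)
    # Force a proper rotation (det = +1), not a reflection.
    return U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt

# Synthetic check: fabricate a ground-truth rotation, generate matched
# direction pairs, and recover it.
rng = np.random.default_rng(0)
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(R_true) < 0:
    R_true[:, 0] *= -1
g_imu = rng.normal(size=(5, 3))   # gravity readings in the inertial-sensor frame
v_cam = g_imu @ R_true.T          # corresponding vertical directions in the camera frame
R_est = relative_rotation(v_cam, g_imu)
```

With noise-free synthetic pairs the rotation is recovered exactly; with real, noisy measurements the SVD solution remains the least-squares optimum over the pooled direction pairs.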
The inertially sensed gravity provides a vertical reference for monocular and stereo vision systems: it establishes an artificial horizon, enables segmentation of vertical features, and constrains stereo correspondence for ground-plane points and 3D vertical features. This vertical reference can also enable stereo depth-map alignment and ground segmentation, reducing the dimensionality of the full registration problem.
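A minimal sketch of the artificial-horizon idea: the horizon is the vanishing line of horizontal planes, given in homogeneous pixel coordinates by l = K^{-T} g, where K is the camera intrinsic matrix and g the gravity direction in the camera frame. The intrinsics and accelerometer reading below are illustrative values, not from the project:

```python
import numpy as np

def horizon_line(K, g_cam):
    """Horizon as the vanishing line of horizontal planes: l = inv(K).T @ g.

    Returns (a, b, c) with a*u + b*v + c = 0 in pixel coordinates,
    scaled so (a, b) is a unit vector.
    """
    g = np.asarray(g_cam, float)
    l = np.linalg.inv(K).T @ (g / np.linalg.norm(g))
    return l / np.linalg.norm(l[:2])

# Illustrative intrinsics and an accelerometer reading at rest, with the
# camera slightly pitched so gravity has a small forward (z) component:
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
g_cam = np.array([0.0, 9.7, 0.8])        # m/s^2 in the camera frame
a, b, c = horizon_line(K, g_cam)
v_centre = -(a * 320.0 + c) / b          # horizon row at the image centre column
```

With a level camera (gravity along the image y-axis) the horizon passes through the principal point; the small pitch in this example shifts the horizon row away from the image centre.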
To perform independent motion segmentation for a moving robotic observer, we explored the fusion of optical-flow and stereo techniques with data from the inertial and magnetic sensors. A depth-map registration and independent motion segmentation method is presented that exploits the cooperation between these distinct sensing modalities.
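One way the vertical reference can assist such registration is by rotating the stereo point cloud into a gravity-aligned frame, where the ground plane becomes a simple height threshold. The sketch below assumes a clean stereo point cloud and a known gravity direction in the camera frame; the function names and the synthetic scene are hypothetical:

```python
import numpy as np

def rotation_aligning(a, b):
    """Rotation R with R @ a_hat = b_hat (Rodrigues two-vector formula)."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    v, c = np.cross(a, b), a @ b
    vx = np.array([[ 0.0, -v[2],  v[1]],
                   [ v[2],  0.0, -v[0]],
                   [-v[1],  v[0],  0.0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)   # assumes a, b not opposite

def segment_ground(points_cam, g_cam, height_tol=0.05):
    """Flag points whose gravity-aligned height is within height_tol of the lowest point."""
    R = rotation_aligning(g_cam, np.array([0.0, 0.0, -1.0]))  # send gravity to "down" = -z
    heights = (points_cam @ R.T)[:, 2]
    return heights < heights.min() + height_tol

# Synthetic scene: a 3x3 patch of ground points plus two elevated points,
# viewed by a camera pitched 20 degrees about its x-axis.
th = np.deg2rad(20.0)
R_wc = np.array([[1.0,        0.0,         0.0],
                 [0.0, np.cos(th), -np.sin(th)],
                 [0.0, np.sin(th),  np.cos(th)]])
ground = np.array([[x, y, 0.0] for x in range(3) for y in range(3)], float)
raised = np.array([[1.0, 1.0, 0.5], [1.0, 1.5, 0.8]])
pts_cam = np.vstack([ground, raised]) @ R_wc.T
mask = segment_ground(pts_cam, R_wc @ np.array([0.0, 0.0, -9.8]))
```

Because gravity fixes two of the three rotational degrees of freedom, aligning both depth maps to the vertical before matching leaves only a rotation about the vertical axis plus a translation to estimate, which is the dimensionality reduction mentioned above.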