
Integrating Inertial Data to a Hybrid Direct-Indirect Visual SLAM System

Abstract

A contemporary trend in the field of simultaneous localization and mapping (SLAM) is the application of sensor fusion to improve performance. There are many sources of additional data, including but not limited to inertial measurement units (IMUs), event cameras, and depth data. This paper introduces a monocular visual SLAM system that tightly combines photometric data, visually extracted geometric information, and inertial measurements. Our work extends the energy function developed by H-SLAM, designed for the joint optimization of photometric and geometric residuals during tracking, so that it also handles inertial residuals. Furthermore, our system inherits H-SLAM's loop-closure mechanisms, which are tightly coupled with the tracking process to ensure global consistency across large-scale maps. On standard benchmarks, our system performs favorably against previous SLAM systems that fuse photometric, geometric, and inertial data and is competitive with state-of-the-art SLAM systems.
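The abstract does not state the combined objective explicitly; the following is a minimal sketch of what such a joint energy could look like. The weights $\lambda_g$, $\lambda_i$ and all residual symbols are illustrative assumptions, not the paper's notation.

```latex
% Minimal sketch (assumed notation, not the paper's): a joint energy over a
% pose/state estimate \xi combining the three residual classes named above.
\begin{equation*}
E(\xi) =
  \underbrace{\sum_{\mathbf{p} \in \mathcal{P}}
    \big\| I_j(\mathbf{p}') - I_i(\mathbf{p}) \big\|_{\gamma}}_{\text{photometric}}
  + \lambda_g
  \underbrace{\sum_{k}
    \big\| \mathbf{u}_k - \pi(\xi, \mathbf{X}_k) \big\|^2}_{\text{geometric (reprojection)}}
  + \lambda_i
  \underbrace{\big\| r_{\mathrm{IMU}}(\xi) \big\|^2_{\Sigma}}_{\text{inertial (preintegration)}}
\end{equation*}
% I_i, I_j: reference and target images; \mathbf{p}' the reprojection of \mathbf{p};
% \|\cdot\|_\gamma a robust (e.g., Huber) norm; \mathbf{u}_k matched keypoints;
% \pi the camera projection; r_IMU a preintegrated inertial residual with
% covariance \Sigma. All weights and symbols here are hypothetical.
```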