Monocular Camera Localization in 3D LiDAR Maps

2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016

Citations: 176 | Views: 102
Abstract
Localizing a camera in a given map is essential for vision-based navigation. In contrast to common methods for visual localization that use maps acquired with cameras, we propose a novel approach that tracks the pose of a monocular camera with respect to a given 3D LiDAR map. We employ a visual odometry system based on local bundle adjustment to reconstruct a sparse set of 3D points from image features. These points are continuously matched against the map to track the camera pose in an online fashion. Our approach to visual localization has several advantages. Since it relies only on matching geometry, it is robust to changes in the photometric appearance of the environment. Utilizing panoramic LiDAR maps additionally provides viewpoint invariance, while only low-cost and lightweight camera sensors are needed for tracking. We present real-world experiments demonstrating that our method accurately estimates the 6-DoF camera pose over long trajectories and under varying conditions.
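The abstract's core step is aligning the sparse 3D points from visual odometry with the LiDAR map. The paper does not give code, but the geometric idea can be illustrated with an ICP-style iteration: associate each reconstructed point with its nearest map point, then refit a rigid transform in closed form (Kabsch/Umeyama). The function names and the brute-force matching below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Closed-form least-squares rotation R and translation t with
    R @ src_i + t ~= dst_i (Kabsch/Umeyama algorithm)."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t

def icp_step(points, map_points, R, t):
    """One data-association + alignment step: transform the sparse
    reconstruction by the current pose guess, match each point to its
    nearest map point, then refit the rigid transform."""
    transformed = points @ R.T + t
    # Brute-force nearest neighbours; a k-d tree would be used in practice.
    d2 = ((transformed[:, None, :] - map_points[None, :, :]) ** 2).sum(-1)
    matches = map_points[d2.argmin(axis=1)]
    return estimate_rigid_transform(points, matches)
```

With a reasonable initial pose from the odometry, repeating `icp_step` refines the camera pose against the map; the method in the paper embeds this geometric matching in an online tracking loop rather than running it in isolation.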
Keywords
monocular camera localization, 3D LiDAR maps, vision-based navigation, visual localization, visual odometry system, local bundle adjustment, 3D points, image features, matching geometry, photometric appearance, panoramic LiDAR maps, viewpoint invariance, lightweight camera sensors, 6-DoF camera pose