Visual-lidar odometry and mapping: low-drift, robust, and fast

IEEE International Conference on Robotics and Automation (2015)

Cited 796 | Views 232
Abstract
Here, we present a general framework for combining visual odometry and lidar odometry in a fundamental, first-principles manner. The method improves on the state of the art in performance, particularly in robustness to aggressive motion and to temporary absence of visual features. The proposed online method starts with visual odometry to estimate the ego-motion and to register point clouds from a scanning lidar at high frequency but low fidelity. Then, scan-matching-based lidar odometry refines the motion estimate and the point cloud registration simultaneously. We show results with datasets collected in our own experiments as well as on the KITTI odometry benchmark. Our proposed method is ranked #1 on the benchmark in terms of average translation and rotation errors, with 0.75% relative position drift. In addition to comparing motion estimation accuracy, we evaluate the robustness of the method when the sensor suite moves at high speed and is subject to significant ambient lighting changes.
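The coarse-to-fine pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: 2-D rigid transforms stand in for the full SE(3) estimates, and the function names and increment values are hypothetical.

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous 2-D rigid transform (illustrative stand-in for SE(3))."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

def visual_odometry_step(prev_pose, coarse_delta):
    # Stage 1: high-frequency but low-fidelity ego-motion estimate
    # from the camera, composed onto the previous pose.
    return prev_pose @ coarse_delta

def lidar_refinement(coarse_pose, correction):
    # Stage 2: scan-matching lidar odometry refines the coarse
    # estimate (the correction would come from e.g. ICP in practice).
    return coarse_pose @ correction

pose = np.eye(3)
# Coarse visual-odometry increment, carrying some rotational drift ...
pose = visual_odometry_step(pose, se2(1.0, 0.0, 0.05))
# ... which the lidar scan-matching stage corrects.
pose = lidar_refinement(pose, se2(-0.02, 0.0, -0.05))
print(np.round(pose, 3))
```

The design choice the abstract emphasizes is ordering: the fast visual stage keeps the estimate available at high rate, while the slower lidar stage removes its accumulated drift, so the pipeline degrades gracefully when visual features are temporarily lost.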
Keywords
distance measurement, image matching, image registration, motion estimation, optical radar, KITTI odometry benchmark, aggressive motion, ambient lighting changes, ego-motion estimation, first-principles method, point cloud registration, scan-matching-based lidar odometry, temporary lack of visual features, visual-lidar mapping, visual-lidar odometry