Visual triage: A bag-of-words experience selector for long-term visual route following.

ICRA (2017)

Abstract
Our work builds upon Visual Teach & Repeat 2 (VT&R2): a vision-in-the-loop autonomous navigation system that enables the rapid construction of route networks, safely built through operator-controlled driving. Added routes can be followed autonomously using visual localization. To enable long-term operation that is robust to appearance change, its Multi-Experience Localization (MEL) leverages many previously driven experiences when localizing to the manually taught network. While this multi-experience method is effective across appearance change, the computation becomes intractable as the number of experiences grows into the tens and hundreds. This paper introduces an algorithm that prioritizes the experiences most relevant to live operation, limiting the number of experiences required for localization. The proposed algorithm uses a visual Bag-of-Words description of the live view to select relevant experiences based on what the vehicle is seeing right now, without having to factor in all possible environmental influences on scene appearance. The system runs in the loop, in real time, does not require bootstrapping, can be applied to any point-feature MEL paradigm, and eliminates the need for offline visual training by using an online, local visual vocabulary. By picking a subset of experiences visually similar to the live view, we demonstrate safe, vision-in-the-loop route following over a 31-hour period, despite appearance changes as different as night and day.
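As a rough, hypothetical illustration of the selection idea described above (not the authors' implementation), the sketch below quantizes the live view's point-feature descriptors against a small visual vocabulary, forms a normalized bag-of-words histogram, and ranks stored experiences by cosine similarity to that histogram. The function names, the brute-force quantization, and the top-k cutoff are assumptions made for clarity; in the paper the vocabulary is built online and locally rather than trained offline.

```python
import numpy as np

def bow_histogram(descriptors, vocabulary):
    """Quantize point-feature descriptors (N x D) against a visual vocabulary
    (K x D) and return an L2-normalized bag-of-words histogram of length K."""
    # Nearest vocabulary word for each descriptor (brute-force for clarity).
    dists = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
    words = np.argmin(dists, axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def select_experiences(live_descriptors, experience_bows, vocabulary, k=5):
    """Rank previously driven experiences by cosine similarity between their
    stored BoW vectors and the live view's BoW vector; keep the top k for
    multi-experience localization."""
    live_bow = bow_histogram(live_descriptors, vocabulary)
    scores = {eid: float(np.dot(live_bow, bow))
              for eid, bow in experience_bows.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:k]

# Hypothetical usage: pick the 5 most relevant of many stored experiences.
# vocabulary        : K x D array of locally learned visual words
# experience_bows   : {experience_id: length-K normalized BoW vector}
# live_descriptors  : N x D descriptors extracted from the current camera view
```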
Keywords
visual triage, bag-of-words experience selector, long-term visual route following, visual teach & repeat 2, VT&R2, vision-in-the-loop autonomous navigation system, route network construction, operator-controlled driving, visual localization, multi-experience localization, environmental influences, real-time in-the-loop operation, point-feature MEL paradigm