Feature-based Visual Odometry for Bronchoscopy: A Dataset and Benchmark

2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

Abstract
Bronchoscopy is a medical procedure in which a flexible tube with a camera is inserted into the airways to survey, diagnose, and treat lung diseases. Due to the complex branching anatomy of the bronchial tree and the visual similarity of the inner surfaces of the segmental airways, navigation systems are now routinely used to guide the operator toward the lung periphery during procedures. Current navigation systems rely on sensor-integrated bronchoscopes to track the position of the bronchoscope in real time. This approach has limitations, including increased cost and limited use in non-specialized settings. To address this issue, researchers have proposed visual odometry algorithms that track the bronchoscope camera without the need for external sensors. However, due to the lack of publicly available datasets, progress has been limited. To this end, we have developed a database of bronchoscopy videos recorded in a phantom lung model and in ex-vivo human lungs. The dataset contains 34 video sequences with over 23,000 frames, with odometry ground truth collected using electromagnetic tracking sensors. With our dataset, we empower the robotics and machine learning community to advance the field, and we share our insights on the challenges of endoscopic visual odometry. Furthermore, we provide benchmark results for this dataset: state-of-the-art feature extraction algorithms, including SIFT, ORB, SuperPoint, Shi-Tomasi, and LoFTR, are evaluated on it. The benchmark results show that LoFTR outperforms the other approaches, but still produces significant errors in the presence of rapid movements and occlusions.
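For readers unfamiliar with the feature-based pipeline being benchmarked, the sketch below shows one way a frame-to-frame visual odometry step could look using ORB features and OpenCV. It is a minimal illustration under stated assumptions, not the authors' implementation: the frame file names and the camera intrinsic matrix K are placeholders, and monocular pose recovery only yields translation up to scale.

```python
# Minimal sketch of a feature-based visual odometry step on two consecutive
# bronchoscopy frames, in the spirit of the ORB baseline named in the abstract.
# File paths and the intrinsic matrix K are illustrative placeholders,
# not values from the dataset.
import cv2
import numpy as np

# Hypothetical bronchoscope camera intrinsics (fx, fy, cx, cy).
K = np.array([[400.0,   0.0, 320.0],
              [  0.0, 400.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Load two consecutive grayscale frames (placeholder paths).
frame0 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
frame1 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Detect and describe ORB features in both frames.
orb = cv2.ORB_create(nfeatures=2000)
kp0, des0 = orb.detectAndCompute(frame0, None)
kp1, des1 = orb.detectAndCompute(frame1, None)

# Match descriptors with brute-force Hamming matching and cross-checking.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des0, des1), key=lambda m: m.distance)

pts0 = np.float32([kp0[m.queryIdx].pt for m in matches])
pts1 = np.float32([kp1[m.trainIdx].pt for m in matches])

# Estimate the essential matrix with RANSAC and recover the relative pose
# (rotation R and unit-scale translation direction t) between the frames.
E, inliers = cv2.findEssentialMat(pts0, pts1, K, method=cv2.RANSAC,
                                  prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts0, pts1, K, mask=inliers)

print("Relative rotation:\n", R)
print("Translation direction (scale unknown from monocular VO):\n", t.ravel())
```

In an evaluation like the one described in the abstract, such per-frame relative poses would typically be chained into a trajectory and compared against the electromagnetic-tracking ground truth, for example via absolute or relative trajectory error.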