Leveraging Deep Learning Based Object Detection For Localising Autonomous Personal Mobility Devices In Sparse Maps

2019 IEEE Intelligent Transportation Systems Conference (ITSC), 2019

Abstract
This paper presents a low-cost, resource-efficient localisation approach for autonomous driving in GPS-denied environments. One of the most challenging aspects of traditional landmark-based localisation in the context of autonomous driving is the need to detect landmarks accurately and frequently. We leverage a state-of-the-art deep learning framework, YOLO (You Only Look Once), to carry out this perceptual task using data obtained from monocular cameras. Bearing-only information extracted from the YOLO detections and vehicle odometry are fused using an Extended Kalman Filter (EKF) to produce an estimate of the location of the autonomous vehicle, together with its associated uncertainty. This approach achieves real-time sub-metre localisation accuracy using only a sparse map of an outdoor urban environment. The broader motivation of this research is to improve the safety and reliability of Personal Mobility Devices (PMDs) through autonomous technology; accordingly, all the ideas presented here are demonstrated on an instrumented mobility scooter platform.
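The fusion step described in the abstract can be pictured as a standard bearing-only EKF over a planar pose. The sketch below is only an illustrative outline under assumed models (a unicycle odometry model, a known sparse landmark map, and invented names such as ekf_predict and ekf_update_bearing); it is not the authors' implementation, and the bearing measurement here stands in for a bearing derived from a YOLO detection in a monocular image.

```python
import numpy as np

def wrap_angle(a):
    """Wrap an angle to [-pi, pi)."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def ekf_predict(x, P, v, omega, dt, Q):
    """Propagate the pose [px, py, theta] with a unicycle odometry model."""
    px, py, th = x
    x_pred = np.array([px + v * np.cos(th) * dt,
                       py + v * np.sin(th) * dt,
                       wrap_angle(th + omega * dt)])
    # Jacobian of the motion model with respect to the pose.
    F = np.array([[1.0, 0.0, -v * np.sin(th) * dt],
                  [0.0, 1.0,  v * np.cos(th) * dt],
                  [0.0, 0.0,  1.0]])
    return x_pred, F @ P @ F.T + Q

def ekf_update_bearing(x, P, z, landmark, R):
    """Correct the pose with one bearing-only observation of a mapped landmark."""
    dx, dy = landmark[0] - x[0], landmark[1] - x[1]
    q = dx * dx + dy * dy
    z_hat = wrap_angle(np.arctan2(dy, dx) - x[2])   # expected bearing
    H = np.array([[dy / q, -dx / q, -1.0]])         # measurement Jacobian
    S = H @ P @ H.T + R                             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
    innov = wrap_angle(z - z_hat)
    x_new = x + (K @ np.array([[innov]])).ravel()
    x_new[2] = wrap_angle(x_new[2])
    P_new = (np.eye(3) - K @ H) @ P
    return x_new, P_new

# Example: one predict/update cycle with a hypothetical landmark at (5, 2).
x = np.array([0.0, 0.0, 0.0])          # initial pose [px, py, theta]
P = np.diag([0.1, 0.1, 0.05])          # initial pose covariance
Q = np.diag([0.02, 0.02, 0.01])        # odometry noise
R = np.array([[0.03]])                 # bearing noise (rad^2)
x, P = ekf_predict(x, P, v=1.0, omega=0.1, dt=0.1, Q=Q)
x, P = ekf_update_bearing(x, P, z=0.40, landmark=(5.0, 2.0), R=R)
print(x)
```

Because each landmark observation contributes only a bearing, a single update constrains the pose weakly; repeated updates against different mapped landmarks, combined with odometry, are what drive the covariance down, which is consistent with the sparse-map setting the paper targets.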
Keywords
monocular cameras, YOLO framework, vehicle odometry, extended Kalman filter, autonomous vehicle, sparse map, outdoor urban environment, autonomous technology, instrumented mobility scooter platform, deep learning based object detection, resource-efficient localisation approach, autonomous driving, autonomous personal mobility devices, real-time sub-metre localisation