Towards Autonomous Driving - a Multi-Modal 360° Perception Proposal

ITSC (2020)

Citations: 6 | Views: 18
Abstract
In this paper, a multi-modal 360° framework for 3D object detection and tracking for autonomous vehicles is presented. The process is divided into four main stages. First, images are fed into a CNN to obtain instance segmentation masks of the surrounding road participants. Second, LiDAR-to-image association is performed for the estimated mask proposals. Then, the isolated points of every object are processed by a PointNet ensemble to compute their corresponding 3D bounding boxes and poses. Lastly, a tracking stage based on an Unscented Kalman Filter is used to track the agents over time. The solution, based on a novel sensor fusion configuration, provides accurate and reliable road environment detection. A wide variety of tests of the system, deployed in an autonomous vehicle, have successfully assessed the suitability of the proposed perception stack in a real autonomous driving application.
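The second stage, LiDAR-to-image association, can be illustrated with a minimal sketch: LiDAR points are transformed into the camera frame, projected through the intrinsics, and then matched against each binary instance mask. The function names, the calibration matrices `T_cam_lidar` and `K`, and the mask layout below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project 3D LiDAR points into the image plane (pinhole model sketch).

    points_lidar: (N, 3) points in the LiDAR frame (assumed layout).
    T_cam_lidar:  (4, 4) rigid transform from the LiDAR to the camera frame.
    K:            (3, 3) camera intrinsic matrix.
    Returns (N, 2) pixel coordinates and a boolean mask of points in front
    of the camera (z > 0), which are the only valid projections.
    """
    n = points_lidar.shape[0]
    homog = np.hstack([points_lidar, np.ones((n, 1))])   # homogeneous coords
    pts_cam = (T_cam_lidar @ homog.T).T[:, :3]           # LiDAR -> camera frame
    in_front = pts_cam[:, 2] > 0                         # discard points behind camera
    uv = (K @ pts_cam.T).T                               # apply intrinsics
    uv = uv[:, :2] / uv[:, 2:3]                          # perspective divide
    return uv, in_front

def points_in_mask(uv, in_front, mask):
    """Return indices of projected points that fall inside a binary instance mask."""
    h, w = mask.shape
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    idx = np.where(valid)[0]
    return idx[mask[v[idx], u[idx]]]
```

The points selected for each mask are exactly the "isolated points of every object" that the third stage feeds into the PointNet ensemble.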
Keywords
isolated points, tracking stage, unscented Kalman filter, sensor fusion configuration, road environment detection, autonomous vehicle, perception stack, autonomous driving application, autonomous driving, perception proposal, CNN, instance segmentation, LiDAR-to-image association, mask proposals, PointNet ensemble, multi-modal 360° framework, 3D object detection