Urban Driving with Conditional Imitation Learning

2020 IEEE International Conference on Robotics and Automation (ICRA)

Cited by 137 | Views 182
Abstract
Hand-crafting generalised decision-making rules for real-world urban autonomous driving is hard. Learning behaviour from easy-to-collect human driving demonstrations is an appealing alternative. Prior work has studied imitation learning (IL) for autonomous driving, but with a number of limitations: performing only lane-following rather than following a user-defined route; using a single camera view or heavily cropped frames that lack state observability; providing only lateral (steering) control but not longitudinal (speed) control; and lacking interaction with traffic. Importantly, the majority of such systems have been evaluated primarily in simulation, a simplified domain that lacks real-world complexities. Motivated by these challenges, we focus on learning representations of semantics, geometry and motion with computer vision for IL from human driving demonstrations. As our main contribution, we present an end-to-end conditional imitation learning approach, combining both lateral and longitudinal control on a real vehicle for following urban routes with simple traffic. We address inherent dataset bias by data balancing, training our final policy on approximately 30 hours of demonstrations gathered over six months. We evaluate our method on an autonomous vehicle by driving 35 km of novel routes in European urban streets.
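The conditional IL setup described in the abstract can be illustrated with a minimal "branched" policy: a shared encoder maps visual features to a representation, and the route command (e.g. turn left, go straight, turn right) selects which output head produces the lateral and longitudinal targets. This is a hypothetical sketch with random placeholder weights, not the paper's trained network; the encoder, head names, and feature dimensions are illustrative assumptions.

```python
import numpy as np

# Hypothetical command-conditional ("branched") policy sketch.
# A shared encoder processes image features; the navigation command
# selects a dedicated head that outputs (steering, speed).
# All weights are random placeholders, not the authors' trained model.

COMMANDS = ["left", "straight", "right"]

rng = np.random.default_rng(0)
W_enc = rng.standard_normal((128, 64))                       # shared encoder weights
heads = {c: rng.standard_normal((64, 2)) for c in COMMANDS}  # one head per command

def policy(image_features, command):
    """Return (steering, speed) for the given route command."""
    z = np.tanh(image_features @ W_enc)   # shared representation
    steering, speed = z @ heads[command]  # command-specific branch
    return float(steering), float(speed)

obs = rng.standard_normal(128)            # stand-in for CNN image features
steer, speed = policy(obs, "left")
```

The branching makes the same observation yield different controls depending on the commanded route, which is what lets a single learned policy follow user-defined routes rather than only lane-follow.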
Keywords
real-world urban autonomous driving,human driving demonstrations,user-defined route,single camera view,heavily cropped frames,lateral control,longitudinal control,real-world complexities,end-to-end conditional imitation learning approach,urban routes,simple traffic,autonomous vehicle,European urban streets,urban driving,hand-crafting generalised decision-making rules