Implementing Planning KL-Divergence.

ECCV Workshops (2020)

Abstract
Variants of accuracy and precision are the gold standard by which the computer vision community measures progress of perception algorithms. One reason for the ubiquity of these metrics is that they are largely task-agnostic; we in general seek to detect zero false negatives or positives. The downside of these metrics is that, at worst, they penalize all incorrect detections equally without conditioning on the task or scene, and at best, heuristics need to be chosen to ensure that different mistakes count differently. In this paper, we revisit "Planning KL-Divergence" (PKL), a principled metric for 3D object detection designed specifically for the task of self-driving. The core idea behind PKL is to isolate the task of object detection and measure the impact the produced detections would induce on the downstream task of driving. We summarize the functionality provided by our Python package planning-centric-metrics, which implements PKL. nuScenes is in the process of incorporating PKL into their detection leaderboard, and we hope that the convenience of our implementation encourages other leaderboards to follow suit.
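The abstract does not show the package's API, but the quantity it describes can be sketched: PKL compares the distribution over future ego trajectories a planner produces when conditioned on ground-truth objects against the distribution produced when conditioned on the detector's output, via KL divergence. The function name and the grid-of-probabilities representation below are illustrative assumptions, not the planning-centric-metrics API.

```python
import numpy as np

def pkl_sketch(p_gt, p_det, eps=1e-12):
    """Illustrative PKL-style score (hypothetical helper, not the package API).

    p_gt, p_det: arrays of shape (T, K) holding, per future timestep t,
    a planner's probability distribution over K candidate ego positions,
    conditioned on ground-truth objects (p_gt) and on detections (p_det).
    Returns the KL divergence D_KL(p_gt || p_det) summed over timesteps.
    """
    # Convert to float and clip to avoid log(0), then renormalize each row.
    p = np.clip(np.asarray(p_gt, dtype=float), eps, None)
    q = np.clip(np.asarray(p_det, dtype=float), eps, None)
    p = p / p.sum(axis=-1, keepdims=True)
    q = q / q.sum(axis=-1, keepdims=True)
    # Sum of per-timestep KL divergences; 0 iff the planner is unaffected
    # by the detector's mistakes, larger when detections change the plan.
    return float(np.sum(p * (np.log(p) - np.log(q))))
```

A detector whose errors do not change the planner's distribution scores 0; errors that shift probability mass onto different trajectories (e.g. a missed pedestrian) score higher, which is how PKL weights mistakes by their driving impact rather than uniformly.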
Keywords
planning, KL-divergence