Ada: Adversarial Data Augmentation For Object Detection

2019 IEEE Winter Conference on Applications of Computer Vision (WACV)

Cited by 5 | Viewed 89

Abstract
The use of random perturbations of ground-truth data, such as random translation or scaling of bounding boxes, is a common data augmentation heuristic that has been shown to prevent overfitting and improve generalization. Since the design of data augmentation is largely guided by reported best practices, it is difficult to tell whether those design choices are optimal. To provide a more principled perspective, we develop a game-theoretic interpretation of data augmentation in the context of object detection. We aim to find optimal adversarial perturbations of the ground-truth data (i.e., the worst-case perturbations) that force the object bounding-box predictor to learn from the hardest distribution of perturbed examples, for better test-time performance. We establish that the game-theoretic solution (Nash equilibrium) provides both an optimal predictor and an optimal data augmentation distribution. We show that our adversarial method of training a predictor can significantly improve test-time performance for the task of object detection. On the ImageNet, Pascal VOC, and MS-COCO object detection tasks, our adversarial approach improves performance by about 16%, 5%, and 2%, respectively, compared to the best-performing data augmentation methods.
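The inner "max" of this min-max game can be illustrated with a toy sketch: instead of applying a random translation/scaling to a ground-truth box, sample a pool of candidate perturbations and keep the one that maximizes the predictor's loss (here, 1 − IoU against the ground truth). This is a minimal illustration of the idea, not the paper's implementation; the candidate ranges, the loss, and the helper names (`perturb`, `adversarial_augment`) are all assumptions made for the example.

```python
import random

def iou(a, b):
    # boxes as (x1, y1, x2, y2); returns intersection-over-union
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def perturb(box, dx, dy, s):
    # translate the box by (dx, dy) and scale width/height by s about its centre
    cx, cy = (box[0] + box[2]) / 2 + dx, (box[1] + box[3]) / 2 + dy
    w, h = (box[2] - box[0]) * s, (box[3] - box[1]) * s
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

def adversarial_augment(box, predict, rng, n_candidates=32):
    """Sample candidate perturbations of a ground-truth box and return the
    one that maximises the predictor's loss (1 - IoU) -- the inner 'max'
    of the min-max game. Perturbation ranges here are illustrative."""
    candidates = [perturb(box,
                          rng.uniform(-8, 8),   # random translation in x
                          rng.uniform(-8, 8),   # random translation in y
                          rng.uniform(0.8, 1.25))  # random scaling
                  for _ in range(n_candidates)]
    return max(candidates, key=lambda c: 1.0 - iou(predict(c), box))
```

Training on boxes chosen this way (the outer "min") exposes the predictor to the hardest examples in the perturbation family, rather than an average-case random sample.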
Keywords
adversarial data augmentation,random translation,bounding boxes,game-theoretic interpretation,optimal adversarial perturbations,worst case perturbations,perturbed examples,test-time performance,optimal predictor,optimal data augmentation distribution,MS-COCO object detection tasks