Multi-view self-supervised deep learning for 6D pose estimation in the Amazon Picking Challenge
International Conference on Robotics and Automation (ICRA), 2017
Abstract
Robot warehouse automation has attracted significant interest in recent years, perhaps most visibly in the Amazon Picking Challenge (APC) [1]. A fully autonomous warehouse pick-and-place system requires robust vision that reliably recognizes and locates objects amid cluttered environments, self-occlusions, sensor noise, and a large variety of objects. In this paper we present an approach that leverages multi-view RGB-D data and self-supervised, data-driven learning to overcome those difficulties. The approach was part of the MIT-Princeton Team system that took 3rd and 4th place in the stowing and picking tasks, respectively, at the 2016 APC. In the proposed approach, we segment and label multiple views of a scene with a fully convolutional neural network, and then fit pre-scanned 3D object models to the resulting segmentation to obtain the 6D object pose. Training a deep neural network for segmentation typically requires a large amount of training data. We propose a self-supervised method to generate a large labeled dataset without tedious manual segmentation. We demonstrate that our system can reliably estimate the 6D pose of objects under a variety of scenarios. All code, data, and benchmarks are available at http://apc.cs.princeton.edu/
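The model-fitting step described in the abstract, aligning a pre-scanned 3D object model to segmented scene points to recover a 6D pose, can be sketched as a least-squares rigid alignment. This is a minimal illustration, not the authors' implementation: it assumes point correspondences are already known (as within a single ICP iteration), whereas the actual system fits models to segmented point clouds with an iterative registration; the function name `fit_rigid_transform` is hypothetical.

```python
import numpy as np

def fit_rigid_transform(model_pts, scene_pts):
    """Least-squares rigid transform (R, t) mapping model_pts onto scene_pts,
    via the SVD-based Kabsch method. Both arrays are (N, 3) with row i of
    model_pts corresponding to row i of scene_pts (an assumption that real
    registration pipelines establish iteratively, e.g. via nearest neighbors)."""
    mu_m = model_pts.mean(axis=0)   # model centroid
    mu_s = scene_pts.mean(axis=0)   # scene centroid
    # Cross-covariance of the centered point sets
    H = (model_pts - mu_m).T @ (scene_pts - mu_s)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_s - R @ mu_m
    return R, t                     # the 6D pose: rotation and translation
```

In a full pipeline, this solve would run inside an ICP loop restricted to points the segmentation network labeled as belonging to the object, which is what ties the learned segmentation to the geometric pose estimate.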
Keywords
multi-view self-supervised deep learning, 6D pose estimation, Amazon Picking Challenge, robot warehouse automation, autonomous warehouse pick-and-place system, robust vision, object recognition, multi-view RGB-D data, self-supervised learning, data-driven learning, MIT-Princeton team system, stowing tasks, picking tasks, convolutional neural network, 3D object models, 6D object pose segmentation, deep neural network training