Combining Detection and Tracking for Human Pose Estimation in Videos

Computer Vision and Pattern Recognition (2020)

Citations 137 | Views 204
Abstract
We propose a novel top-down approach that tackles the problem of multi-person human pose estimation and tracking in videos. In contrast to existing top-down approaches, our method is not limited by the performance of its person detector and can predict the poses of person instances that the detector fails to localize. It achieves this capability by propagating known person locations forward and backward in time and searching for poses in those regions. Our approach consists of three components: (i) a Clip Tracking Network that performs body joint detection and tracking simultaneously on small video clips; (ii) a Video Tracking Pipeline that merges the fixed-length tracklets produced by the Clip Tracking Network into arbitrary-length tracks; and (iii) a Spatial-Temporal Merging procedure that refines the joint locations based on spatial and temporal smoothing terms. Thanks to the precision of our Clip Tracking Network and our merging procedure, our approach produces very accurate joint predictions and can fix common mistakes in hard scenarios such as heavily entangled people. Our approach achieves state-of-the-art results on both joint detection and tracking, on both the PoseTrack 2017 and 2018 datasets, and against all top-down and bottom-up approaches.
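
The tracklet-merging step of the Video Tracking Pipeline can be pictured with a short sketch. The snippet below is a hypothetical illustration, not the authors' implementation: it assumes each tracklet stores per-frame joint coordinates and attaches each new fixed-length tracklet to the closest overlapping track by mean joint distance; the names (Tracklet, mean_joint_distance, merge_tracklets) and the distance threshold are illustrative assumptions.

# Minimal sketch of merging fixed-length tracklets into longer tracks,
# under the assumptions stated above (not the paper's actual algorithm).
from dataclasses import dataclass
import numpy as np

@dataclass
class Tracklet:
    start_frame: int
    poses: np.ndarray  # shape (T, J, 2): T frames, J joints, (x, y) coordinates

def mean_joint_distance(a: Tracklet, b: Tracklet) -> float:
    """Mean L2 joint distance between two tracklets on their overlapping frames."""
    lo = max(a.start_frame, b.start_frame)
    hi = min(a.start_frame + len(a.poses), b.start_frame + len(b.poses))
    if lo >= hi:
        return float("inf")  # no temporal overlap, cannot be the same person
    pa = a.poses[lo - a.start_frame : hi - a.start_frame]
    pb = b.poses[lo - b.start_frame : hi - b.start_frame]
    return float(np.linalg.norm(pa - pb, axis=-1).mean())

def merge_tracklets(tracks: list, new: Tracklet, thresh: float = 20.0) -> None:
    """Greedily attach `new` to the closest existing track, else start a new track."""
    dists = [mean_joint_distance(t, new) for t in tracks]
    if dists and min(dists) < thresh:
        best = tracks[int(np.argmin(dists))]
        # extend the matched track with the non-overlapping tail of `new`
        tail_start = best.start_frame + len(best.poses) - new.start_frame
        if tail_start < len(new.poses):
            best.poses = np.concatenate([best.poses, new.poses[tail_start:]])
    else:
        tracks.append(new)

In this toy version, identity is decided purely by geometric agreement of the poses on overlapping frames; the paper additionally relies on the Clip Tracking Network's joint predictions and a Spatial-Temporal Merging procedure to refine the final joint locations.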
Keywords
multi-person human pose estimation, person detector, person instances, body joint detection, video clips, joint predictions, video tracking pipeline, known person locations, top-down approaches, clip tracking network, fixed-length tracklets, bottom-up approaches, PoseTrack 2018 dataset, PoseTrack 2017 dataset, spatial smoothing terms, temporal smoothing terms, joint locations, spatial-temporal merging procedure