You2Me: Inferring Body Pose in Egocentric Video via First and Second Person Interactions

2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020

Cited by 61
Abstract
The body pose of a person wearing a camera is of great interest for applications in augmented reality, healthcare, and robotics, yet much of the person's body is out of view for a typical wearable camera. We propose a learning-based approach to estimate the camera wearer's 3D body pose from egocentric video sequences. Our key insight is to leverage interactions with another person---whose body pose we can directly observe---as a signal inherently linked to the body pose of the first-person subject. We show that since interactions between individuals often induce a well-ordered series of back-and-forth responses, it is possible to learn a temporal model of the interlinked poses even though one party is largely out of view. We demonstrate our idea on a variety of domains with dyadic interaction and show the substantial impact on egocentric body pose estimation, which improves the state of the art. Video results are available at this http URL
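The abstract does not specify the model architecture. As a rough illustration only of the stated idea -- a temporal model that maps the observed second person's pose sequence to estimates of the camera wearer's pose -- here is a minimal recurrent sketch. The dimensions, weight names, and update rule are all assumptions for illustration, not the authors' You2Me network:

```python
import numpy as np

def predict_wearer_pose(interactee_seq, Wx, Wh, Wo):
    """Toy temporal model: fuse each observed second-person pose frame
    into a hidden state, then decode a first-person 3D pose estimate.
    Purely illustrative; not the architecture from the paper."""
    h = np.zeros(Wh.shape[0])
    outputs = []
    for x_t in interactee_seq:
        h = np.tanh(Wx @ x_t + Wh @ h)  # recurrent update over time
        outputs.append(Wo @ h)          # decode wearer pose for this frame
    return np.stack(outputs)

rng = np.random.default_rng(0)
d_in, d_h, d_out = 2 * 25, 64, 3 * 25   # assumed: 25 2D keypoints in, 25 3D joints out
Wx = rng.normal(scale=0.1, size=(d_h, d_in))
Wh = rng.normal(scale=0.1, size=(d_h, d_h))
Wo = rng.normal(scale=0.1, size=(d_out, d_h))
seq = rng.normal(size=(30, d_in))        # 30 frames of observed interactee pose
poses = predict_wearer_pose(seq, Wx, Wh, Wo)
print(poses.shape)  # one wearer-pose vector per input frame
```

In the paper's setting such a model would be trained on paired recordings of both people, so that the back-and-forth structure of dyadic interaction supervises the out-of-view wearer pose.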
Keywords
egocentric body pose estimation,temporal model,camera wearer 3D body pose estimation,robotics,healthcare,You2Me,dyadic interaction,interlinked poses,egocentric video sequences,camera wearer,learning-based approach,typical wearable camera,augmented reality