Seeing Skin in Reduced Coordinates

2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017)

Abstract
We present a skin tracking and reconstruction method that uses a monocular camera and a depth sensor to recover skin sliding motions on the surface of a deforming object. Such depth sensors are widely available. Our key idea is to use a reduced coordinate framework that implicitly constrains skin to conform to the shape of the underlying object when it slides. The skin configuration in 3D can then be efficiently reconstructed by tracking two-dimensional skin features in video. This representation is well suited for tracking subtle skin movements in the upper face and on the hand. The reconstructed skin motions have many uses, including synthesizing and retargeting animations, recognizing facial expressions, and learning data-driven models of skin movement. In our face tracking examples, we recover subtle but important details of skin movement around the eyes. We validated the algorithm using a hand gesture sequence with known skin motion, recovering skin sliding motion with low reconstruction error.
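The abstract does not specify the exact reduced-coordinate parameterization, but the stated idea (skin points expressed relative to the underlying surface so they conform to it as it deforms) can be illustrated with a minimal sketch. The sketch below assumes barycentric coordinates on a tracked triangle mesh; the function name reconstruct_skin_points and the array layout are hypothetical, not taken from the paper.

```python
# Minimal sketch (not the paper's implementation): skin points stored in reduced
# coordinates as (triangle index, barycentric weights) on a tracked mesh. Because
# each point is expressed relative to the surface, it conforms to the underlying
# shape as the mesh deforms; sliding corresponds to a change of reduced coordinates.
import numpy as np

def reconstruct_skin_points(vertices, triangles, reduced_coords):
    """Map reduced coordinates back to 3D for one frame.

    vertices       : (V, 3) array of tracked mesh vertex positions for this frame
    triangles      : (T, 3) integer array of vertex indices per triangle
    reduced_coords : list of (tri_index, (b0, b1, b2)) barycentric skin coordinates
    returns        : (N, 3) array of reconstructed 3D skin positions
    """
    points = []
    for tri_idx, (b0, b1, b2) in reduced_coords:
        i, j, k = triangles[tri_idx]
        # Barycentric interpolation keeps the point on the deforming surface.
        p = b0 * vertices[i] + b1 * vertices[j] + b2 * vertices[k]
        points.append(p)
    return np.array(points)

# Example: a skin feature tracked to the centroid of triangle 0.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
tris = np.array([[0, 1, 2]])
skin = reconstruct_skin_points(verts, tris, [(0, (1/3, 1/3, 1/3))])
print(skin)  # [[0.333... 0.333... 0.0]]
```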
Keywords
seeing skin,reduced coordinates,skin tracking,skin reconstruction,monocular camera,depth sensor,object deformation,facial expressions recognition,learning data-driven models,skin movement,hand gesture sequence