Correspondence-free online human motion retargeting
CoRR(2023)
Abstract
We present a data-driven framework for unsupervised human motion retargeting
that animates a target subject with the motion of a source subject. Our method
is correspondence-free, requiring neither spatial correspondences between the
source and target shapes nor temporal correspondences between different frames
of the source motion. This makes it possible to animate a target shape with
arbitrary sequences of humans in motion, possibly captured using 4D acquisition
platforms or consumer devices. Our method unifies the advantages of two existing
lines of
work, namely skeletal motion retargeting, which leverages long-term temporal
context, and surface-based retargeting, which preserves surface details, by
combining a geometry-aware deformation model with a skeleton-aware motion
transfer approach. This lets the method take long-term temporal context into
account while preserving surface details. During inference, our method runs
online, i.e., input frames can be processed serially, and retargeting is
performed in a
single forward pass per frame. Experiments show that including long-term
temporal context during training improves the method's accuracy for skeletal
motion and detail preservation. Furthermore, our method generalizes to
unobserved motions and body shapes. We demonstrate that our method achieves
state-of-the-art results on two test datasets and that it can be used to
animate human models with the output of a multi-view acquisition platform. Code
is available at
.
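The abstract describes online inference: source frames arrive serially, and each is retargeted to the target shape in a single forward pass while a long-term temporal context is carried along. A minimal sketch of such a loop is below; `RetargetingModel`, its placeholder arithmetic, and the scalar "vertices" are purely illustrative assumptions, not the authors' actual architecture or API.

```python
# Hypothetical sketch of an online retargeting loop: frames are processed
# serially, one forward pass per frame, with state carrying temporal context.
# All names and the toy arithmetic are illustrative placeholders.

class RetargetingModel:
    """Stand-in for a model combining skeleton-aware motion transfer with
    geometry-aware surface deformation, as the abstract describes."""

    def __init__(self, target_shape):
        self.target_shape = target_shape   # rest-pose geometry of the target
        self.temporal_state = None         # carries long-term temporal context

    def forward(self, source_frame):
        # 1) transfer skeletal motion from source to target, updating the
        #    temporal state (placeholder: just remember the frame)
        self.temporal_state = source_frame
        # 2) deform the target surface to follow the transferred motion
        #    (placeholder: offset each target "vertex" by the frame value)
        return [v + source_frame for v in self.target_shape]


def retarget_online(model, source_stream):
    """Process source frames serially; one forward pass per frame."""
    for frame in source_stream:
        yield model.forward(frame)


# toy usage with integer "vertices" and "frames"
model = RetargetingModel(target_shape=[0, 1, 2])
outputs = list(retarget_online(model, source_stream=[10, 20]))
```

The generator structure mirrors the online constraint: no future frames are needed, so the loop can consume a live capture stream.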
Keywords
motion,human,correspondence-free