
Keyframe Control of Music-driven 3D Dance Generation

Zhipeng Yang, Yu-Hui Wen, Shu-Yu Chen, Xiao Liu, Yuan Gao, Yong-Jin Liu, Lin Gao, Hongbo Fu

IEEE Transactions on Visualization and Computer Graphics (2024)

Abstract
For 3D animators, choreography with artificial intelligence has attracted more attention recently. However, most existing deep learning methods mainly rely on music for dance generation and lack sufficient control over generated dance motions. To address this issue, we introduce the idea of keyframe interpolation for music-driven dance generation and present a novel transition generation technique for choreography. Specifically, this technique synthesizes visually diverse and plausible dance motions by using normalizing flows to learn the probability distribution of dance motions conditioned on a piece of music and a sparse set of key poses. Thus, the generated dance motions respect both the input musical beats and the key poses. To achieve a robust transition of varying lengths between the key poses, we introduce a time embedding at each timestep as an additional condition. Extensive experiments show that our model generates more realistic, diverse, and beat-matching dance motions than the compared state-of-the-art methods, both qualitatively and quantitatively. Our experimental results demonstrate the superiority of the keyframe-based control for improving the diversity of the generated dance motions.
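The abstract states that a time embedding is added at each timestep so the model can handle transitions of varying length between key poses. The paper does not specify the embedding's exact form; the following is a minimal sketch, assuming a sinusoidal embedding of normalized progress concatenated with per-frame music features and key-pose context to form the flow's conditioning vector (all function names and dimensions here are illustrative, not from the paper).

```python
import numpy as np

def time_embedding(t, total_frames, dim=8):
    """Sinusoidal embedding of normalized progress t / (total_frames - 1).

    Using normalized progress (rather than the raw frame index) is one
    plausible way to make the conditioning robust to transitions of
    varying length, as the abstract describes. Hypothetical design.
    """
    progress = t / max(total_frames - 1, 1)  # in [0, 1]
    freqs = 2.0 ** np.arange(dim // 2)
    return np.concatenate([np.sin(np.pi * progress * freqs),
                           np.cos(np.pi * progress * freqs)])

def build_condition(music_feats, key_pose_feats, total_frames, dim=8):
    """Per-frame conditioning vector: music features for that frame,
    the (fixed) key-pose context, and the time embedding."""
    return np.stack([
        np.concatenate([music_feats[t], key_pose_feats,
                        time_embedding(t, total_frames, dim)])
        for t in range(total_frames)
    ])

# Example: a 30-frame transition with 35-dim music features per frame
# and a 10-dim key-pose context (dimensions chosen for illustration).
music = np.random.rand(30, 35)
key_poses = np.random.rand(10)
cond = build_condition(music, key_poses, 30)
print(cond.shape)  # (30, 53)
```

Each row of `cond` would then condition the normalizing flow at that timestep, so the sampled motion respects both the musical beats and the surrounding key poses.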
Keywords
Humanities, Animation, Three-dimensional displays, Deep learning, Probabilistic logic, Interpolation, Task analysis, 3D animation, choreography, generative flows, multi-modal, music-driven