Conversational Co-Speech Gesture Generation via Modeling Dialog Intention, Emotion, and Context with Diffusion Models
CoRR (2023)
Abstract
Audio-driven co-speech human gesture generation has advanced remarkably in recent years. However, most prior work focuses only on single-person audio-driven gesture generation. We address conversational co-speech gesture generation, which considers multiple participants in a conversation. This is a novel and challenging task because it requires simultaneously incorporating semantic information and other relevant features from both the primary speaker and the interlocutor. To this end, we propose CoDiffuseGesture, a diffusion model-based approach for speech-driven interaction gesture generation that models bilateral conversational intention, emotion, and semantic context. Our method synthesizes appropriate, speech-matched, high-quality interactive gestures for conversational motion through an intention perception module and an emotion reasoning module, both operating at the sentence level with a pretrained language model. Experimental results demonstrate the promising performance of the proposed method.
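The abstract describes a conditional diffusion pipeline: gesture poses are sampled by iteratively denoising from noise, conditioned on features from both the primary speaker and the interlocutor. Below is a minimal illustrative sketch of that idea using a standard DDPM-style sampling loop; all shapes, feature names, and the stub denoiser are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed conditioning features (shapes are hypothetical):
# sentence-level intention/emotion embeddings for both participants,
# plus audio features for the primary speaker.
speaker_ctx = rng.normal(size=32)    # intention + emotion, primary speaker
interloc_ctx = rng.normal(size=32)   # intention + emotion, interlocutor
audio_feat = rng.normal(size=64)     # speech features
cond = np.concatenate([speaker_ctx, interloc_ctx, audio_feat])

D = 48   # gesture pose dimension (hypothetical)
T = 50   # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

# Stub denoiser: a fixed random linear map standing in for the trained
# network that predicts noise from (x_t, conditioning, timestep).
W = rng.normal(scale=0.01, size=(D, D + cond.size + 1))

def predict_noise(x_t, t):
    inp = np.concatenate([x_t, cond, [t / T]])
    return W @ inp

# DDPM ancestral sampling: the conversational conditioning enters only
# through predict_noise.
x = rng.normal(size=D)  # start from pure noise
for t in reversed(range(T)):
    eps = predict_noise(x, t)
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    x = (x - coef * eps) / np.sqrt(alphas[t])
    if t > 0:
        x = x + np.sqrt(betas[t]) * rng.normal(size=D)

print(x.shape)  # one generated pose vector of dimension D
```

In the paper's actual system, the stub denoiser would be a trained network and the conditioning vectors would come from the intention perception and emotion reasoning modules built on a pretrained language model.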
Keywords
Co-speech gesture generation, interaction gesture, dialog intention and emotion, multi-agent conversational interaction