Transformer-S2A: Robust and Efficient Speech-to-Animation

IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2022

Abstract
We propose a novel, robust, and efficient Speech-to-Animation (S2A) approach for generating synchronized facial animation in human-computer interaction. In contrast to conventional approaches, the proposed approach uses phonetic posteriorgrams (PPGs) of spoken phonemes as input to ensure cross-language and cross-speaker ability, and introduces corresponding prosody features (i.e., pitch and energy) to further enhance the expressiveness of the generated animation. A mixture-of-experts (MoE)-based Transformer is employed to better model contextual information while providing significant gains in computational efficiency. Experiments demonstrate the effectiveness of the proposed approach in both objective and subjective evaluations, with a 17× inference speedup compared with the state-of-the-art approach.
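The abstract attributes the efficiency gain to the mixture-of-experts (MoE) Transformer, where a router activates only a subset of expert networks per frame. The sketch below is a minimal illustration of that general idea, not the authors' implementation: all module names, dimensions, and the top-1 routing scheme are assumptions chosen for clarity.

```python
# Minimal MoE feed-forward block sketch for a Transformer layer (illustrative only).
# A router scores experts per frame and only the selected expert runs, so a
# fraction of the parameters is active per input -- the source of the speedup.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoEFeedForward(nn.Module):
    def __init__(self, d_model: int = 256, d_ff: int = 1024, num_experts: int = 4):
        super().__init__()
        # One small feed-forward network per expert (hypothetical sizes).
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )
        # Router that scores each expert for every input frame.
        self.router = nn.Linear(d_model, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, d_model), e.g. embedded PPG + prosody features.
        gate = F.softmax(self.router(x), dim=-1)   # (B, T, num_experts)
        top1 = gate.argmax(dim=-1)                 # hard top-1 routing per frame
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top1 == i                       # frames routed to expert i
            if mask.any():
                out[mask] = expert(x[mask])
        # Scale by the gate value so the router receives gradients.
        return out * gate.gather(-1, top1.unsqueeze(-1))
```

In a full model, this block would replace the dense feed-forward sublayer inside each Transformer layer; the attention sublayers are unchanged.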
Keywords
Speech-to-Animation, Transformer, Phonetic Posteriorgrams, Mixture-of-Experts