Combining Emotion and Facial Nonmanual Signals in Synthesized American Sign Language.
ASSETS (2012)
ABSTRACT

Translating from English to American Sign Language (ASL) requires an avatar to display synthesized ASL. Essential to the language are nonmanual signals that appear on the face. Previous avatars were hampered by an inability to portray emotion and facial nonmanual signals that occur at the same time. A new animation system addresses this challenge. Animations produced by the new system were tested with 40 members of the Deaf community in the United States. For each animation, participants were able to identify both nonmanual signals and emotional states. Co-occurring question nonmanuals and affect information were distinguishable, which is particularly striking because the two processes can move an avatar's brows in opposing directions.