
P‐2.25: Research on Virtual Reality Field Based on Multimodal Emotion Recognition

Yanfei Wang, Jingliang Wang, Lijun Wang, Zhengping Li, Ying Li

SID Symposium Digest of Technical Papers (2023)

Abstract
Emotion recognition has become a research hotspot in pattern recognition. Because single-modal emotion recognition suffers from incomplete information and strong interference, multimodal emotion recognition has been widely studied. Multimodal data include, but are not limited to, facial expression, text, and voice data. Among the many ways emotion is expressed, expression, text, and voice are the most direct and reliable carriers of emotional information. It is therefore of significant research and practical value to study emotion recognition jointly across the expression, text, and voice modalities and to apply the results to virtual reality (VR). This paper surveys multimodal emotion recognition, extracts features from voice, text, and expression, fuses them into a multimodal representation for emotion analysis, and applies the result to the VR field. The main work is as follows: the technologies relevant to multimodal emotion recognition research in VR are introduced, including deep learning techniques, virtual reality technology, and multimodal fusion methods. For deep learning, the focus is on convolutional neural networks, recurrent neural networks, and their variants. For virtual reality, its characteristics and applications are introduced. For multimodal fusion, three commonly used fusion methods are introduced.
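The abstract describes extracting per-modality features (expression, text, voice) and fusing them for emotion analysis. The paper itself is not reproduced here, so the sketch below only illustrates two of the commonly used fusion strategies in this literature, feature-level (early) fusion and decision-level (late) fusion; all names, dimensions, and weights are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of two common multimodal fusion strategies.
# Feature dimensions and class probabilities below are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Assume per-modality feature vectors already produced by upstream encoders
# (e.g. a CNN for facial expression, RNNs for text and voice).
expression_feat = rng.standard_normal(128)
text_feat = rng.standard_normal(64)
voice_feat = rng.standard_normal(32)

def early_fusion(feats):
    """Feature-level fusion: concatenate modality features into one vector."""
    return np.concatenate(feats)

def late_fusion(probs, weights=None):
    """Decision-level fusion: weighted average of per-modality class probabilities."""
    probs = np.stack(probs)
    if weights is None:
        weights = np.full(len(probs), 1.0 / len(probs))  # equal weights by default
    return np.average(probs, axis=0, weights=weights)

fused = early_fusion([expression_feat, text_feat, voice_feat])
print(fused.shape)  # (224,) — 128 + 64 + 32

# Hypothetical per-modality softmax outputs over 6 emotion classes.
p_expr = np.array([0.6, 0.1, 0.1, 0.1, 0.05, 0.05])
p_text = np.array([0.3, 0.3, 0.1, 0.1, 0.1, 0.1])
p_voice = np.array([0.5, 0.2, 0.1, 0.1, 0.05, 0.05])
p = late_fusion([p_expr, p_text, p_voice])
print(p.argmax())  # 0 — class with highest averaged probability
```

A hybrid scheme, the third approach typically surveyed, combines both: fused features feed one classifier whose output is then merged with per-modality decisions.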
Keywords
multimodal emotion recognition, virtual reality field