UniMEEC: Towards Unified Multimodal Emotion Recognition and Emotion Cause Analysis
arXiv (2024)
Abstract
Multimodal emotion recognition in conversation (MERC) and multimodal
emotion-cause pair extraction (MECPE) have recently garnered significant
attention. Emotions are expressions of affect or feelings, while the specific
events, thoughts, or situations that trigger them are known as emotion causes.
The two are like two sides of a coin, jointly describing human behaviors and
intents. However, most existing works treat MERC and MECPE as separate tasks,
which can pose challenges for integrating emotion and cause in real-world
applications. In this paper, we propose a Unified Multimodal Emotion
recognition and Emotion-Cause analysis framework (UniMEEC) to explore the
causality and complementarity between emotion and emotion cause. Concretely,
UniMEEC reformulates the MERC and MECPE tasks as two mask prediction problems,
enhancing the interaction between emotion and cause. Meanwhile, UniMEEC shares
prompt learning across modalities to probe modality-specific knowledge from
the pre-trained model. Furthermore, we propose task-specific hierarchical
context aggregation to control the information flow to each task. Experimental
results on four public benchmark datasets verify the performance of UniMEEC on
the MERC and MECPE tasks, showing consistent improvements over
state-of-the-art methods.
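To illustrate the mask-prediction reformulation mentioned in the abstract, the sketch below casts emotion recognition as a cloze-style prompt whose mask slot a masked language model would fill with an emotion word. The template, verbalizer mapping, and scoring interface are illustrative assumptions, not the paper's actual implementation; a real system would score verbalizer words with a pre-trained masked-LM head rather than the toy scorer used here.

```python
# Hedged sketch: recasting emotion recognition as mask prediction.
# All names here (template, verbalizer, scorer) are assumptions for
# illustration, not UniMEEC's actual design.

EMOTION_VERBALIZER = {
    "joy": "happy",
    "sadness": "sad",
    "anger": "angry",
    "neutral": "neutral",
}

def build_mask_prompt(utterance: str, mask_token: str = "[MASK]") -> str:
    """Wrap an utterance in a cloze template whose mask slot a
    pre-trained model is asked to fill with an emotion word."""
    return f"{utterance} The speaker feels {mask_token}."

def predict_emotion(utterance: str, score_fn) -> str:
    """Pick the emotion label whose verbalizer word scores highest
    at the mask position. `score_fn(prompt, word) -> float` stands
    in for a real masked-LM head."""
    prompt = build_mask_prompt(utterance)
    return max(EMOTION_VERBALIZER,
               key=lambda e: score_fn(prompt, EMOTION_VERBALIZER[e]))

def toy_score(prompt: str, word: str) -> float:
    """Toy scorer based on word overlap; a real system would query
    a pre-trained language model instead."""
    return float(word in prompt.lower())

print(predict_emotion("I am so happy today!", toy_score))  # → joy
```

The same cloze pattern extends to cause extraction by asking the model to fill a mask describing the candidate cause utterance, which is how a unified framework can share one prediction interface across both tasks.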