
Multi-Task Learning for Situated Multi-Domain End-to-End Dialogue Systems

Po-Nien Kung, Chung-Cheng Chang, Tse-Hsuan Yang, Hsin-Kai Hsu, Yu-Jia Liou, Yun-Nung Chen

arXiv (2021)

Abstract
Task-oriented dialogue systems are a promising area of NLP. Previous work (Hosseini-Asl et al. 2020; Wolf et al. 2019; Zhao and Eskenazi 2016; Budzianowski and Vulić 2019) showed the effectiveness of using a single GPT-2-based model to predict belief states and responses via causal language modeling. In this paper, we leverage multi-task learning techniques to train a GPT-2-based model on a more challenging dataset (Moon et al. 2020; Crook et al. 2019) with multiple domains, multiple modalities, and greater diversity in output formats. Using only a single model, our method achieves better performance on all sub-tasks, across domains, than task- and domain-specific models. Furthermore, we evaluate several proposed strategies for GPT-2-based dialogue systems through comprehensive ablation studies, showing that each technique further improves performance.
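To make the causal-language-modeling setup concrete: such systems typically flatten each dialogue turn, its belief state, and the system response into a single token sequence, so one GPT-2 model can generate everything left-to-right. The sketch below illustrates this serialization; the delimiter tokens (`<context>`, `<belief>`, `<response>`) and the slot format are illustrative assumptions, not the exact format used in this paper.

```python
# Illustrative sketch (assumed format, not the paper's exact scheme):
# flatten one dialogue turn into a single training string so a causal LM
# such as GPT-2 can predict the belief state and response after the context.

def serialize_turn(context, belief_state, response):
    """Serialize one turn for causal-LM training.

    context: list of (speaker, utterance) pairs
    belief_state: dict mapping "domain-slot" -> value
    response: system response string
    """
    ctx = " ".join(f"<{spk}> {utt}" for spk, utt in context)
    # Sort slots for a deterministic target sequence.
    belief = " ; ".join(f"{slot} = {val}"
                        for slot, val in sorted(belief_state.items()))
    return f"<context> {ctx} <belief> {belief} <response> {response} <eos>"

seq = serialize_turn(
    context=[("user", "I need a cheap hotel in the centre.")],
    belief_state={"hotel-pricerange": "cheap", "hotel-area": "centre"},
    response="Sure, how many nights will you stay?",
)
```

At inference time, the model is conditioned on everything up to `<belief>` and decodes the rest; the multi-task variant described in the abstract would share this single model across domains and sub-tasks rather than training one model per task.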