Solving a Cloze Test for Generative Commonsense Question Answering.

ICCC (2022)

Abstract
Commonsense question answering has always been a challenging task due to its wide domain coverage and the implicitness of commonsense knowledge. Few works tackle answer generation for commonsense questions, which is more difficult than the multiple-choice setting. This motivates us to delve into the answer generation ability of pretrained language models (PLMs). Rather than utilizing knowledge bases to extract commonsense-related knowledge for answering commonsense questions, we exploit the latent knowledge within PLMs to solve this task. In this work, we reformulate this generative task as a masked token prediction task and experiment with both masked language models (MLMs) and generative language models (GLMs). Experimental results on the ProtoQA dataset demonstrate the effectiveness of our proposed method. Our work finds that both MLMs and GLMs are good at masked token prediction and that PLMs have acquired commonsense knowledge through large-corpus pre-training.
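The core idea described in the abstract, recasting a generative question as a cloze (masked token) prompt, can be illustrated with a minimal sketch. The rewrite rule and the function `to_cloze` below are illustrative assumptions, not the paper's actual prompt templates; the resulting string is the kind of input a masked language model such as BERT could fill in.

```python
# Illustrative sketch (not the authors' exact method): turn a ProtoQA-style
# generative question into a cloze prompt with a masked answer slot.

def to_cloze(question: str, mask_token: str = "[MASK]") -> str:
    """Rewrite a 'Name something ...' question as a masked statement.

    The template here is a simplified assumption for illustration;
    the paper's real prompt construction may differ.
    """
    q = question.strip().rstrip(".?")
    prefix = "name something "
    if q.lower().startswith(prefix):
        # "Name something people do before bed."
        # -> "One thing people do before bed is [MASK]."
        rest = q[len(prefix):]
        return f"One thing {rest} is {mask_token}."
    # Fallback: append the mask as the answer slot.
    return f"{q}: {mask_token}."

print(to_cloze("Name something people do before going to bed."))
# -> One thing people do before going to bed is [MASK].
```

A masked language model would then score vocabulary items for the `[MASK]` position, while a generative language model could be prompted with the same text truncated at the mask and asked to continue.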
Keywords
Commonsense question answering