UNK-VQA: A Dataset and a Probe into the Abstention Ability of Multi-modal Large Models
arXiv (2023)
Abstract
Teaching Visual Question Answering (VQA) models to refrain from answering
unanswerable questions is necessary for building a trustworthy AI system.
Existing studies, though they have explored various aspects of VQA, have
somewhat overlooked this particular attribute. This paper aims to bridge the
research gap
by contributing a comprehensive dataset, called UNK-VQA. The dataset is
specifically designed to address the challenge of questions that models do not
know. To this end, we first augment the existing data via deliberate
perturbations of either the image or the question. Specifically, we carefully ensure
that the question-image semantics remain close to the original unperturbed
distribution. By this means, the identification of unanswerable questions
becomes challenging, setting our dataset apart from others that involve mere
image replacement. We then extensively evaluate the zero- and few-shot
performance of several emerging multi-modal large models and discover their
significant limitations when applied to our dataset. We additionally
propose a straightforward method to tackle these unanswerable questions. This
dataset, we believe, will serve as a valuable benchmark for enhancing the
abstention capability of VQA models, thereby leading to increased
trustworthiness of AI systems. We have made the dataset
(https://github.com/guoyang9/UNK-VQA) available to facilitate further
exploration in this area.