Effects of Explainable Artificial Intelligence on trust and human behavior in a high-risk decision task

Computers in Human Behavior (2023)

Abstract
Understanding the recommendations of an artificial intelligence (AI) based assistant for decision-making is especially important in high-risk tasks, such as deciding whether a mushroom is edible or poisonous. To foster user understanding and appropriate trust in such systems, we assessed the effects of explainable artificial intelligence (XAI) methods and an educational intervention on AI-assisted decision-making behavior in a 2 × 2 between-subjects online experiment with N = 410 participants. We developed a novel use case in which users go on a virtual mushroom hunt and are tasked with picking edible mushrooms and leaving poisonous ones. Users were provided with an AI-based app that showed classification results for mushroom images. To manipulate explainability, one subgroup additionally received attribution-based and example-based explanations of the AI’s predictions; for the educational intervention, one subgroup received additional information on how the AI worked. We found that the group that received explanations outperformed the group that did not and showed better-calibrated trust levels. Contrary to our expectations, we found that the educational intervention, domain-specific (i.e., mushroom) knowledge, and AI knowledge had no effect on performance. We discuss practical implications and introduce the mushroom-picking task as a promising use case for XAI research.
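To make concrete what "attribution-based explanations of the AI's predictions" can look like in practice, below is a minimal sketch assuming a PyTorch image classifier and Captum's IntegratedGradients. The paper does not name the model, attribution method, or framework used in the app, so every identifier here is an illustrative assumption, not the authors' implementation.

import torch
from torchvision import models
from captum.attr import IntegratedGradients

# Stand-in classifier; the study's actual mushroom model is not public.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Placeholder for a preprocessed mushroom photo (batch x channels x H x W).
image = torch.rand(1, 3, 224, 224)

# Class the model predicts for this image.
pred_class = model(image).argmax(dim=1).item()

# Per-pixel attribution scores: how strongly each input region pushed the
# model toward its prediction. Overlaid on the photo, such scores yield a
# saliency-style visual explanation of the classification result.
ig = IntegratedGradients(model)
attributions = ig.attribute(image, target=pred_class)
print(attributions.shape)  # same shape as the input image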
Keywords
XAI, AI literacy, Domain-specific knowledge, Mushroom identification, Trust calibration, Visual explanation