Investigating Trust in Human-AI Collaboration for a Speech-Based Data Analytics Task

INTERNATIONAL JOURNAL OF HUMAN-COMPUTER INTERACTION (2024)

Abstract
Complex real-world problems can benefit from collaboration between humans and artificial intelligence (AI) to achieve reliable decision-making. We investigate trust in a human-in-the-loop decision-making task, in which participants with a background in the psychological sciences collaborate with an explainable AI system to estimate a speaker's anxiety level from speech. The AI system relies on an explainable boosting machine (EBM) model, which takes prosodic features as input and estimates the anxiety level. Trust in the AI is quantified via self-reported (i.e., administered via a questionnaire) and behavioral (i.e., computed as user-AI agreement) measures, which are positively correlated with each other. Results indicate that humans and the AI exhibit differences in performance depending on the characteristics of the specific case under review. Overall, human annotators' trust in the AI increases over time, with momentary decreases after the AI partner makes an error. Annotators further differ in how appropriately they calibrate their trust in the AI system, with some over-trusting and some under-trusting it. Personality characteristics (i.e., agreeableness, conscientiousness) and overall propensity to trust machines further affect the level of trust in the AI system, with these findings approaching statistical significance. Results from this work will lead to a better understanding of human-AI collaboration and will guide the design of AI algorithms toward supporting better calibration of user trust.
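
The paper's implementation is not included here, but the pipeline the abstract describes can be sketched briefly. The following is a minimal, hypothetical illustration assuming the open-source interpretml package for the EBM and synthetic stand-ins for the prosodic features, user decisions, and anxiety ratings; the feature names and noise model are illustrative, not the authors'.

import numpy as np
from scipy.stats import pearsonr
from interpret.glassbox import ExplainableBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical prosodic features (e.g., pitch, energy, speaking rate)
# and continuous anxiety ratings; real data would be extracted from speech.
n_samples, n_features = 200, 6
X = rng.normal(size=(n_samples, n_features))
y = X @ rng.normal(size=n_features) + rng.normal(scale=0.5, size=n_samples)

# Glass-box regressor: each feature contributes an interpretable
# shape function, which is what makes the AI's estimates explainable.
ebm = ExplainableBoostingRegressor()
ebm.fit(X, y)
ai_estimates = ebm.predict(X)

# Behavioral trust proxy: agreement between the user's final decisions and
# the AI's estimates (here simulated as noisy acceptance of the AI output).
user_decisions = ai_estimates + rng.normal(scale=0.3, size=n_samples)
behavioral_trust, _ = pearsonr(user_decisions, ai_estimates)

# Self-reported trust would come from a questionnaire; the abstract reports
# a positive correlation between the two trust measures.
print(f"user-AI agreement (behavioral trust proxy): {behavioral_trust:.2f}")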
Keywords
Explainable AI, transparency, human trust, trust calibration