Trustworthiness Perceptions of Machine Learning Algorithms: The Influence of Confidence Intervals.

Gene M. Alarcon, Sarah A. Jessup, Scott Meyers, Dexter Johnson, Walter D. Bennette

International Conference on Human-Machine Systems (2024)

Abstract
Insufficient research has investigated the impact of machine learning models on end-users' trust. This study aims to bridge that gap by examining differences in psychological perceptions of trust between two machine learning models. Participants (N = 130) were recruited online and completed an image-binning monitoring task with either an uncalibrated classification (UC) model or a calibrated classification (CC) model that provided confidence intervals for its decisions. The UC model was highly confident regardless of accuracy, whereas the CC model's confidence was better calibrated to its accuracy. Results revealed that participants performed better on the task in the CC condition. Additionally, performance perceptions, purpose perceptions, and reliance intentions increased over time in the CC condition. However, there were no differences in process perceptions between conditions. Calibrated confidence intervals displayed by CC models have thus been shown to be an effective means of increasing transparency and enhancing our understanding of trust in machines.
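The contrast the abstract draws between the UC and CC models can be made concrete with a standard calibration metric. The sketch below is illustrative only, not the authors' task or code: it computes a simple Expected Calibration Error (the confidence-weighted gap between stated confidence and actual accuracy) for a hypothetical overconfident model and a hypothetical calibrated one, both with the same 70% accuracy. All data and names are made up for the example.

```python
# Illustrative sketch (not from the paper): Expected Calibration Error (ECE)
# quantifies how far a model's stated confidence drifts from its accuracy.
def expected_calibration_error(confidences, correct, n_bins=10):
    """Mean |accuracy - confidence| over equal-width confidence bins,
    weighted by the fraction of samples falling in each bin."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (b == 0 and c == 0.0)]
        if not idx:
            continue
        acc = sum(correct[i] for i in idx) / len(idx)   # accuracy in bin
        conf = sum(confidences[i] for i in idx) / len(idx)  # mean confidence
        ece += (len(idx) / n) * abs(acc - conf)
    return ece

# Toy data: both hypothetical models are right on 7 of 10 items.
correct = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]
uc_conf = [0.99] * 10   # "UC-like": 99% confident on every item
cc_conf = [0.70] * 10   # "CC-like": confidence matches actual accuracy

print(expected_calibration_error(uc_conf, correct))  # large gap, ≈ 0.29
print(expected_calibration_error(cc_conf, correct))  # near zero
```

A well-calibrated model, in this sense, is one whose reported confidence can be read as an honest probability of being right, which is the transparency property the study's CC condition provides to participants.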
Keywords
machine learning,trust,transparency,reliance,human-machine interaction,deep neural networks