Missing Pieces: How Framing Uncertainty Impacts Longitudinal Trust in AI Decision Aids – A Gig Driver Case Study
arXiv (2024)
Abstract
Decision aids based on artificial intelligence (AI) are becoming increasingly
common. When such systems are deployed in environments with inherent
uncertainty, following AI-recommended decisions may lead to a wide range of
outcomes. In this work, we investigate how the framing of uncertainty in
outcomes impacts users' longitudinal trust in AI decision aids, which is
crucial to ensuring that these systems achieve their intended purposes. More
specifically, we use gig driving as a representative domain to address the
question: how does exposing uncertainty at different levels of granularity
affect the evolution of users' trust and their willingness to rely on
recommended decisions? We report on a longitudinal mixed-methods study (n =
51) where we measured the trust of gig drivers as they interacted with an
AI-based schedule recommendation tool. Statistically significant quantitative
results indicate that participants' trust in and willingness to rely on the
tool for planning depended on the perceived accuracy of the tool's estimates;
that providing ranged estimates improved trust; and that increasing prediction
granularity and using hedging language improved willingness to rely on the tool
even when trust was low. Additionally, we report on interviews with
participants, which revealed a diversity of experiences with the tool,
suggesting that AI systems must build trust by going beyond general designs to
calibrate the expectations of individual users.