A Qualitative Analysis of Trust Dynamics in Human-Agent Teams (HATs)

Proceedings of the Human Factors and Ergonomics Society Annual Meeting (2022)

Abstract
This study investigated how and why trust changes over time across differing levels of autonomy (LOA) in an intelligence, surveillance, and reconnaissance (ISR) task involving four unmanned aerial vehicles (UAVs). Four LOAs were implemented, ranging from low to high: manual, advice, consent, and veto. The analysis examined whether trust increased, decreased, or stayed the same during transitions between two different LOAs. Through a thematic analysis, themes influencing participants' trust in the agents were identified. Current findings suggest that trust increases most when a participant moves between moderate LOAs, specifically from the condition in which the agent's role is to provide advice to the condition in which the agent makes decisions on its own with the consent of the human. Trust also increased when the human moved from the highest LOA (in which the human merely vetoes agent decisions) to the manual condition. The most prevalent themes influencing trust included perceived agent errors and false alarms.