Positive and negative explanation effects in human–agent teams

Bryan Lavender, Sami Abuhaimed, Sandip Sen

AI and Ethics (2024)

Abstract
Improving agent capabilities and the increasing availability of computing platforms and internet connectivity allow more effective and diverse collaboration between human users and automated agents. To increase the viability and effectiveness of human–agent collaborative teams, there is a pressing need for research that enables such teams to maximally leverage the relative strengths of human and automated reasoners. We study virtual, ad hoc teams, each comprising a human and an agent, that collaborate over a few episodes, where each episode requires them to complete a set of tasks chosen from given task types. Team members are initially unaware of their partner’s capabilities, and the agent, acting as the task allocator, must adapt the allocation process to maximize team performance. This paper analyzes how explanations of allocation decisions affect both user performance and the human workers’ outlook, including factors such as motivation and satisfaction. We investigate how the explanations that the agent allocator provides to the human affect performance and key factors reported by the human teammate in surveys: the motivating effect, explanatory power, and understandability of the explanations, as well as satisfaction with and trust/confidence in the teammate. We evaluate hypotheses about these factors under positive-explanation, negative-explanation, and no-explanation scenarios through experiments with MTurk workers.
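The paper reports a human-subject study rather than an implementation, but the setup the abstract describes (an agent allocator that learns its partner’s task-type capabilities across episodes and frames its allocation decisions positively, negatively, or not at all) can be sketched in code. Below is a minimal, hypothetical Python sketch; the class name, the task types, and the success-rate estimation rule are illustrative assumptions, not the authors’ method.

```python
from collections import defaultdict

# Hypothetical task types; the paper's actual task set is not given here.
TASK_TYPES = ("anagrams", "arithmetic", "image_tagging")
EXPLANATION_STYLES = ("positive", "negative", "none")

class AdaptiveAllocator:
    """Toy allocator: tracks per-member success rates for each task
    type across episodes and assigns each task to the member with the
    higher estimated rate."""

    def __init__(self, members=("human", "agent")):
        self.members = members
        self.successes = defaultdict(int)  # (member, task_type) -> successes
        self.attempts = defaultdict(int)   # (member, task_type) -> attempts

    def estimate(self, member, task_type):
        n = self.attempts[(member, task_type)]
        # Neutral 0.5 prior while a member is untried on a task type,
        # reflecting that partners start unaware of each other's skills.
        return self.successes[(member, task_type)] / n if n else 0.5

    def allocate(self, task_type):
        return max(self.members, key=lambda m: self.estimate(m, task_type))

    def record(self, member, task_type, succeeded):
        self.attempts[(member, task_type)] += 1
        self.successes[(member, task_type)] += int(succeeded)

    def explain(self, assignee, task_type, style):
        """Frame the same allocation decision positively, negatively,
        or not at all (the three conditions the study compares)."""
        other = next(m for m in self.members if m != assignee)
        if style == "positive":
            return (f"{task_type} goes to {assignee}: estimated success "
                    f"rate {self.estimate(assignee, task_type):.0%}.")
        if style == "negative":
            return (f"{task_type} goes to {assignee}: {other}'s estimated "
                    f"success rate is only {self.estimate(other, task_type):.0%}.")
        return ""  # no-explanation condition
```

Under these assumptions, the positive and negative styles describe the same allocation decision; only the framing differs (crediting the assignee’s strength versus pointing to the partner’s weakness), which mirrors the explanation conditions the abstract compares.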
Keywords
Human-agent teams, Explanations, Team performance