Theory of Mind Improves Human's Trust in an Iterative Human-Robot Game.

HAI (2021)

Abstract
Trust is a critical issue in human–robot interaction, as it underlies the establishment of solid relationships. Theory of Mind (ToM) is the cognitive skill that allows us to understand what others think and believe. Several studies in HRI and psychology suggest that trust and ToM are interdependent concepts, since we trust another agent based on our representation of its actions, beliefs, and intentions. However, very few works take the robot's ToM into consideration when studying trust in HRI. In this paper, we examine whether the perceived ToM abilities of a robotic agent influence human–robot trust over time in an iterative game scenario. To this end, participants played an Investment Game with a humanoid robot (Pepper) that was presented as having either low-level or high-level ToM. During the game, participants were asked to choose a sum of money to invest in the robot, and the amount invested served as the main measure of human–robot trust. Our experimental results show that the robot presented with high-level ToM abilities was trusted more than the robot presented with low-level ToM skills.