Unavoidable social contagion of false memory from robots to humans.

The American Psychologist (2023)

Abstract
Many of us interact with voice- or text-based conversational agents daily, but these conversational agents may unintentionally retrieve misinformation from human knowledge databases, confabulate responses on their own, or purposefully spread disinformation for political purposes. Does such misinformation or disinformation become part of our memory to further misguide our decisions? If so, can we prevent humans from suffering such social contagion of false memory? Using a social contagion of memory paradigm, here, we precisely controlled a social robot as an example of these emerging conversational agents. In a series of two experiments (ΣN = 120), the social robot occasionally misinformed participants prior to a recognition memory task. We found that the robot was as powerful as humans at influencing others. Despite the supplied misinformation being emotion- and value-neutral and hence not intrinsically contagious and memorable, 77% of the socially misinformed words became the participants' false memory. To mitigate such social contagion of false memory, the robot also forewarned the participants about its reservation toward the misinformation. However, one-time forewarnings failed to reduce false memory contagion. Even relatively frequent, item-specific forewarnings could not prevent warned items from becoming false memory, although such forewarnings helped increase the participants' overall cautiousness. Therefore, we recommend designing conversational agents to, at best, avoid providing uncertain information or, at least, provide frequent forewarnings about potentially false information. (PsycInfo Database Record (c) 2024 APA, all rights reserved).