
Unavoidable social contagion of false memory from robots to humans.

Affiliations

Department of Psychology, National Taiwan University.

Department of Psychology, Stony Brook University.

Publication Information

Am Psychol. 2024 Feb-Mar;79(2):285-298. doi: 10.1037/amp0001230. Epub 2023 Nov 20.

Abstract

Many of us interact with voice- or text-based conversational agents daily, but these conversational agents may unintentionally retrieve misinformation from human knowledge databases, confabulate responses on their own, or purposefully spread disinformation for political purposes. Does such misinformation or disinformation become part of our memory to further misguide our decisions? If so, can we prevent humans from suffering such social contagion of false memory? Using a social contagion of memory paradigm, here we precisely controlled a social robot as an example of these emerging conversational agents. In a series of two experiments (N = 120), the social robot occasionally misinformed participants prior to a recognition memory task. We found that the robot was as powerful as humans at influencing others. Despite the supplied misinformation being emotion- and value-neutral and hence not intrinsically contagious and memorable, 77% of the socially misinformed words became the participants' false memory. To mitigate such social contagion of false memory, the robot also forewarned the participants about its reservations about the misinformation. However, one-time forewarnings failed to reduce false memory contagion. Even relatively frequent, item-specific forewarnings could not prevent warned items from becoming false memory, although such forewarnings helped increase the participants' overall cautiousness. Therefore, we recommend designing conversational agents to, at best, avoid providing uncertain information or, at least, provide frequent forewarnings about potentially false information. (PsycInfo Database Record (c) 2024 APA, all rights reserved).

