Spearing Emily R, Gile Constantina I, Fogwill Amy L, Prike Toby, Swire-Thompson Briony, Lewandowsky Stephan, Ecker Ullrich K H
School of Psychological Science, The University of Western Australia, Perth, Western Australia, Australia.
School of Psychology, The University of Adelaide, Adelaide, South Australia, Australia.
R Soc Open Sci. 2025 Jun 25;12(6):242148. doi: 10.1098/rsos.242148. eCollection 2025 Jun.
Despite widespread concerns over AI-generated misinformation, its impact on people's reasoning and the effectiveness of countermeasures remain unclear. This study examined whether a pre-emptive, source-focused inoculation, designed to lower trust in AI-generated information, could reduce its influence on reasoning. This approach was compared with a retroactive, content-focused debunking, as well as a simple disclaimer, as often seen on real-world platforms, that AI-generated information may be misleading. Additionally, the malleability of trust in AI-generated information was tested with an intervention designed to boost trust. Across two experiments (total N = 1223), a misleading AI-generated article influenced reasoning regardless of its alleged source (human or AI). In both experiments, the inoculation reduced general trust in AI-generated information but did not significantly reduce the misleading article's specific influence on reasoning. The additional trust-boosting and disclaimer interventions used in Experiment 1 also had no impact. By contrast, debunking the misinformation in Experiment 2 effectively reduced its impact, although only a combination of inoculation and debunking eliminated the misinformation's influence entirely. These findings demonstrate that generative AI can be a persuasive source of misinformation, potentially requiring multiple countermeasures to negate its effects.