Testing theories of political persuasion using AI.

Authors

Argyle Lisa P, Busby Ethan C, Gubler Joshua R, Lyman Alex, Olcott Justin, Pond Jackson, Wingate David

Affiliations

Department of Political Science, Brigham Young University, Provo, UT 84602.

Department of Computer Science, Brigham Young University, Provo, UT 84602.

Publication

Proc Natl Acad Sci U S A. 2025 May 6;122(18):e2412815122. doi: 10.1073/pnas.2412815122. Epub 2025 May 2.

Abstract

Despite its importance to society and many decades of research, key questions about the social and psychological processes of political persuasion remain unanswered, often due to data limitations. We propose that AI tools, specifically generative large language models (LLMs), can be used to address these limitations, offering important advantages in the study of political persuasion. In two preregistered online survey experiments, we demonstrate the potential of generative AI as a tool to study persuasion and provide important insights about the psychological and communicative processes that lead to increased persuasion. Specifically, we test the effects of four AI-generated counterattitudinal persuasive strategies, designed to test the effectiveness of messages that include customization (writing messages based on a receiver's personal traits and beliefs), and elaboration (increased psychological engagement with the argument through interaction). We find that all four types of persuasive AI produce significant attitude change relative to the control and shift vote support for candidates espousing views consistent with the treatments. However, we do not find evidence that message customization via microtargeting or cognitive elaboration through interaction with the AI have much more persuasive effect than a single generic message. These findings have implications for different theories of persuasion, which we discuss. Finally, we find that although persuasive messages are able to moderate some people's attitudes, they have inconsistent and weaker effects on the democratic reciprocity people grant to their political opponents. This suggests that attitude moderation (ideological depolarization) does not necessarily lead to increased democratic tolerance or decreased affective polarization.

Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b874/12067286/bdfd18a3b60f/pnas.2412815122fig01.jpg
