LLM-generated messages can persuade humans on policy issues.

Author Information

Bai Hui, Voelkel Jan G, Muldowney Shane, Eichstaedt Johannes C, Willer Robb

Affiliations

Politics and Social Change Lab, Stanford University, Stanford, CA, USA.

Political Belief Lab, Minnetonka, MN, USA.

Publication Information

Nat Commun. 2025 Jul 1;16(1):6037. doi: 10.1038/s41467-025-61345-5.

Abstract

The emergence of large language models (LLMs) has made it possible for generative artificial intelligence (AI) to tackle many higher-order cognitive tasks, with critical implications for industry, government, and labor markets. Here, we investigate whether existing, openly available LLMs can be used to create messages capable of influencing humans' political attitudes. Across three pre-registered experiments (total N = 4829), participants who read persuasive messages generated by LLMs showed significantly more attitude change across a range of policies, including polarized policies such as an assault weapons ban, a carbon tax, and a paid parental-leave program, relative to control-condition participants who read a neutral message. Overall, LLM-generated messages were similarly effective in influencing policy attitudes as messages crafted by lay humans. Participants' reported perceptions of the authors of the persuasive messages suggest these effects occurred through somewhat distinct causal pathways. While the persuasiveness of LLM-generated messages was associated with perceptions that the author used more facts, evidence, logical reasoning, and a dispassionate voice, the persuasiveness of human-generated messages was associated with perceptions of the author as unique and original. These results demonstrate that recent developments in AI make it possible to create politically persuasive messages quickly, cheaply, and at massive scale.
