Josh A. Goldstein, Jason Chao, Shelby Grossman, Alex Stamos, Michael Tomz
Center for Security and Emerging Technology, Georgetown University, Washington, DC 20001, USA.
Stanford Internet Observatory, Stanford University, Stanford, CA 94305, USA.
PNAS Nexus. 2024 Feb 20;3(2):pgae034. doi: 10.1093/pnasnexus/pgae034. eCollection 2024 Feb.
Can large language models, a form of artificial intelligence (AI), generate persuasive propaganda? We conducted a preregistered survey experiment of US respondents to investigate the persuasiveness of news articles written by foreign propagandists compared to content generated by GPT-3 davinci (a large language model). We found that GPT-3 can create highly persuasive text as measured by participants' agreement with propaganda theses. We further investigated whether a person fluent in English could improve propaganda persuasiveness. Editing the prompt fed to GPT-3 and/or curating GPT-3's output made GPT-3 even more persuasive, and, under certain conditions, as persuasive as the original propaganda. Our findings suggest that propagandists could use AI to create convincing content with limited effort.