Lu Louise, Tormala Zakary L, Duhachek Adam
Stanford University, Stanford, CA, USA.
University of Illinois Chicago, Chicago, IL, USA.
Sci Rep. 2025 May 17;15(1):17170. doi: 10.1038/s41598-025-00791-z.
Exposure to counterattitudinal information has been shown to yield mixed effects on attitude polarization. The current research explores the differential impact of such information when generated by artificial intelligence (AI) versus human sources. While prior work highlights a general aversion to AI for decision-making, our research reveals a consistent openness to AI in the context of counterattitudinal messages. Across four pre-registered studies (N = 2061), we find that when people receive counterattitudinal messages on potentially polarizing issues, AI sources are perceived as less biased, more informative, and having less persuasive intent than human sources. This leads to greater receptiveness to counterattitudinal messages when those messages come from AI rather than human sources. In addition, we find preliminary evidence that receiving counterattitudinal messages from an AI (versus human) source can diminish outgroup animosity and facilitate attitude change.