Altay Sacha, Gilardi Fabrizio
Department of Political Science, University of Zurich, 8050 Zürich, Switzerland.
PNAS Nexus. 2024 Oct 1;3(10):pgae403. doi: 10.1093/pnasnexus/pgae403. eCollection 2024 Oct.
The rise of generative AI tools has sparked debates about the labeling of AI-generated content. Yet the impact of such labels remains uncertain. In two preregistered online experiments among US and UK participants (N = 4,976), we show that while participants did not equate "AI-generated" with "False," labeling headlines as AI-generated lowered their perceived accuracy and participants' willingness to share them, regardless of whether the headlines were true or false and whether they were created by humans or AI. The impact of labeling headlines as AI-generated was one-third the size of the impact of labeling them as false. This AI aversion stems from the expectation that headlines labeled as AI-generated were written entirely by AI with no human supervision. These findings suggest that the labeling of AI-generated content should be approached cautiously to avoid unintended negative effects on harmless or even beneficial AI-generated content, and that effective deployment of labels requires transparency regarding their meaning.