
People have different expectations for their own versus others' use of AI-mediated communication tools.

Author Information

Purcell Zoe A, Dong Mengchen, Nussberger Anne-Marie, Köbis Nils, Jakesch Maurice

Affiliations

LaPsyDÉ, Université Paris Cité, CNRS, Paris, France.

Center for Humans and Machines, Max Planck Institute for Human Development, Berlin, Germany.

Publication Information

Br J Psychol. 2024 Sep 4. doi: 10.1111/bjop.12727.

Abstract

Artificial intelligence (AI) can enhance human communication, for example, by improving the quality of our writing, voice or appearance. However, AI-mediated communication also has risks: it may increase deception, compromise authenticity or yield widespread mistrust. As a result, both policymakers and technology firms are developing approaches to prevent and reduce potentially unacceptable uses of AI communication technologies. However, we do not yet know what people believe is acceptable or what their expectations are regarding usage. Drawing on normative psychology theories, we examine people's judgements of the acceptability of open and secret AI use, as well as people's expectations of their own and others' use. In two studies with representative samples (Study 1: N = 477; Study 2: N = 765), we find that people are less accepting of secret than open AI use in communication, but only when the two are directly compared. Our results also suggest that people believe others will use AI communication tools more than they would themselves, and that people do not expect others' use to align with their expectations of what is acceptable. While much attention has focused on transparency measures, our results suggest that self-other differences are a central factor for understanding people's attitudes and expectations regarding AI-mediated communication.

