
Agentive linguistic framing affects responsibility assignments toward AIs and their creators.

Authors

Dawson Petersen, Amit Almor

Affiliations

Linguistics Program, University of South Carolina, Columbia, SC, United States.

Department of Psychology, University of South Carolina, Columbia, SC, United States.

Publication

Front Psychol. 2025 May 7;16:1498958. doi: 10.3389/fpsyg.2025.1498958. eCollection 2025.

Abstract

Tech companies often use agentive language to describe their AIs (e.g., the Google Blog claims that "Gemini can understand, explain and generate high-quality code"). Psycholinguistic research has shown that violating animacy hierarchies by putting a nonhuman in this agentive subject position (i.e., grammatical metaphor) influences readers to perceive it as a causal agent. However, it is not yet known how this affects readers' responsibility assignments toward AIs or the companies that make them. Furthermore, it is not known whether this effect relies on psychological anthropomorphism or on a more limited set of linguistic causal schemas. We investigated these questions by having participants read a short vignette in which "Dr. AI" gave dangerous health advice in one of two framing conditions (AI as Agent vs. AI as Instrument). Participants then rated how responsible the AI, the company, and the patients were for the outcome, and reported their own AI experience. We predicted that participants would assign more responsibility to the AI in the Agent condition, and that lower AI experience participants would assign more responsibility to the AI because they would be more likely to anthropomorphize it. The results confirmed these predictions; we found an interaction between linguistic framing condition and AI experience such that lower AI experience participants assigned more responsibility to the AI in the Agent condition than in the Instrument condition (t = 2.13, p = 0.032), while higher AI experience participants showed no such difference. Our findings suggest that the effects of agentive linguistic framing toward non-humans are decreased by domain experience because it decreases anthropomorphism.


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4cd1/12092423/31a0ac18430c/fpsyg-16-1498958-g001.jpg
