
Intuitive judgements towards artificial intelligence verdicts of moral transgressions.

Author Information

Liu Yuxin, Moore Adam

Affiliations

School of Philosophy, Psychology and Language Sciences, The University of Edinburgh, Edinburgh, UK.

Centre for Technomoral Futures, Edinburgh Futures Institute, The University of Edinburgh, Edinburgh, UK.

Publication Information

Br J Soc Psychol. 2025 Jul;64(3):e12908. doi: 10.1111/bjso.12908.

Abstract

Automated decision-making systems have become increasingly prevalent in morally salient domains of services, introducing ethically significant consequences. In three pre-registered studies (N = 804), we experimentally investigated whether people's judgements of AI decisions are impacted by a belief alignment with the underlying politically salient context of AI deployment over and above any general attitudes towards AI people might hold. Participants read conservative- or liberal-framed vignettes of AI-detected statistical anomalies as a proxy for potential human prejudice in the contexts of LGBTQ+ rights and environmental protection, and responded to willingness to act on the AI verdicts, trust in AI, and perception of procedural fairness and distributive fairness of AI. Our results reveal that people's willingness to act, and judgements of trust and fairness seem to be constructed as a function of general attitudes of positivity towards AI, the moral intuitive context of AI deployment, pre-existing politico-moral beliefs, and a compatibility between the latter two. The implication is that judgements towards AI are shaped by both the belief alignment effect and general AI attitudes, suggesting a level of malleability and context dependency that challenges the potential role of AI serving as an effective mediator in morally complex situations.


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7c86/12125647/83d7c2aad38a/BJSO-64-0-g002.jpg
