

Intuitive judgements towards artificial intelligence verdicts of moral transgressions.

Authors

Liu Yuxin, Moore Adam

Affiliations

School of Philosophy, Psychology and Language Sciences, The University of Edinburgh, Edinburgh, UK.

Centre for Technomoral Futures, Edinburgh Futures Institute, The University of Edinburgh, Edinburgh, UK.

Publication

Br J Soc Psychol. 2025 Jul;64(3):e12908. doi: 10.1111/bjso.12908.

DOI: 10.1111/bjso.12908
PMID: 40448478
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12125647/
Abstract

Automated decision-making systems have become increasingly prevalent in morally salient domains of services, introducing ethically significant consequences. In three pre-registered studies (N = 804), we experimentally investigated whether people's judgements of AI decisions are impacted by a belief alignment with the underlying politically salient context of AI deployment over and above any general attitudes towards AI people might hold. Participants read conservative- or liberal-framed vignettes of AI-detected statistical anomalies as a proxy for potential human prejudice in the contexts of LGBTQ+ rights and environmental protection, and responded to willingness to act on the AI verdicts, trust in AI, and perception of procedural fairness and distributive fairness of AI. Our results reveal that people's willingness to act, and judgements of trust and fairness seem to be constructed as a function of general attitudes of positivity towards AI, the moral intuitive context of AI deployment, pre-existing politico-moral beliefs, and a compatibility between the latter two. The implication is that judgements towards AI are shaped by both the belief alignment effect and general AI attitudes, suggesting a level of malleability and context dependency that challenges the potential role of AI serving as an effective mediator in morally complex situations.


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7c86/12125647/bc1dddfb798d/BJSO-64-0-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7c86/12125647/83d7c2aad38a/BJSO-64-0-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7c86/12125647/8c190d5eab3e/BJSO-64-0-g003.jpg

Similar articles

1. Intuitive judgements towards artificial intelligence verdicts of moral transgressions. Br J Soc Psychol. 2025 Jul;64(3):e12908. doi: 10.1111/bjso.12908.
2. Influence of AI behavior on human moral decisions, agency, and responsibility. Sci Rep. 2025 Apr 10;15(1):12329. doi: 10.1038/s41598-025-95587-6.
3. Psychological and Brain Responses to Artificial Intelligence's Violation of Community Ethics. Cyberpsychol Behav Soc Netw. 2024 Aug;27(8):562-570. doi: 10.1089/cyber.2023.0524. Epub 2024 May 17.
4. Human's moral judgements towards different social actors: A cross-sectional study. Br J Dev Psychol. 2023 Nov;41(4):343-357. doi: 10.1111/bjdp.12460. Epub 2023 Aug 8.
5. The effects of explicit reasoning on moral judgements. Q J Exp Psychol (Hove). 2024 Apr;77(4):828-845. doi: 10.1177/17470218231179685. Epub 2023 Jun 14.
6. Moral foundations and political attitudes: The moderating role of political sophistication. Int J Psychol. 2016 Aug;51(4):252-60. doi: 10.1002/ijop.12158. Epub 2015 Feb 26.
7. Inconsistent advice by ChatGPT influences decision making in various areas. Sci Rep. 2024 Jul 10;14(1):15876. doi: 10.1038/s41598-024-66821-4.
8. People expect artificial moral advisors to be more utilitarian and distrust utilitarian moral advisors. Cognition. 2025 Mar;256:106028. doi: 10.1016/j.cognition.2024.106028. Epub 2024 Dec 12.
9. AI language model rivals expert ethicist in perceived moral expertise. Sci Rep. 2025 Feb 3;15(1):4084. doi: 10.1038/s41598-025-86510-0.
10. When does "no" mean no? Insights from sex robots. Cognition. 2024 Mar;244:105687. doi: 10.1016/j.cognition.2023.105687. Epub 2023 Dec 27.
