

People expect artificial moral advisors to be more utilitarian and distrust utilitarian moral advisors.

Authors

Myers Simon, Everett Jim A C

Affiliations

Behavioural Science Group, Warwick Business School, University of Warwick, Scarman Rd, Coventry CV4 7AL, UK; School of Psychology, University of Kent, Canterbury, Kent, CT2 7NP, UK.

School of Psychology, University of Kent, Canterbury, Kent, CT2 7NP, UK.

Publication

Cognition. 2025 Mar;256:106028. doi: 10.1016/j.cognition.2024.106028. Epub 2024 Dec 12.

DOI: 10.1016/j.cognition.2024.106028
PMID: 39671980
Abstract

As machines powered by artificial intelligence increase in their technological capacities, there is a growing interest in the theoretical and practical idea of artificial moral advisors (AMAs): systems powered by artificial intelligence that are explicitly designed to assist humans in making ethical decisions. Across four pre-registered studies (total N = 2604) we investigated how people perceive and trust artificial moral advisors compared to human advisors. Extending previous work on algorithmic aversion, we show that people have a significant aversion to AMAs (vs humans) giving moral advice, while also showing that this is particularly the case when advisors - human and AI alike - gave advice based on utilitarian principles. We find that participants expect AI to make utilitarian decisions, and that even when participants agreed with a decision made by an AMA, they still expected to disagree with an AMA more than a human in future. Our findings suggest challenges in the adoption of artificial moral advisors, and particularly those who draw on and endorse utilitarian principles - however normatively justifiable.


Similar Articles

1. People expect artificial moral advisors to be more utilitarian and distrust utilitarian moral advisors.
   Cognition. 2025 Mar;256:106028. doi: 10.1016/j.cognition.2024.106028. Epub 2024 Dec 12.

2. Moral hypocrisy on the basis of construal level: to be a utilitarian personal decision maker or to be a moral advisor?
   PLoS One. 2015 Feb 17;10(2):e0117540. doi: 10.1371/journal.pone.0117540. eCollection 2015.

3. Switching Away from Utilitarianism: The Limited Role of Utility Calculations in Moral Judgment.
   PLoS One. 2016 Aug 9;11(8):e0160084. doi: 10.1371/journal.pone.0160084. eCollection 2016.

4. The mismeasure of morals: antisocial personality traits predict utilitarian responses to moral dilemmas.
   Cognition. 2011 Oct;121(1):154-61. doi: 10.1016/j.cognition.2011.05.010. Epub 2011 Jul 16.

5. Deontological and utilitarian inclinations in moral decision making: a process dissociation approach.
   J Pers Soc Psychol. 2013 Feb;104(2):216-35. doi: 10.1037/a0031021. Epub 2012 Dec 31.

6. People are averse to machines making moral decisions.
   Cognition. 2018 Dec;181:21-34. doi: 10.1016/j.cognition.2018.08.003. Epub 2018 Aug 11.

7. On the uneasy alliance between moral bioenhancement and utilitarianism.
   Bioethics. 2022 Feb;36(2):210-217. doi: 10.1111/bioe.12974. Epub 2021 Nov 19.

8. Moral-dilemma judgments by individuals and groups: Are many heads really more utilitarian than one?
   Cognition. 2025 Mar;256:106053. doi: 10.1016/j.cognition.2024.106053. Epub 2024 Dec 24.

9. Dopamine, religiosity, and utilitarian moral judgment.
   Soc Neurosci. 2021 Dec;16(6):627-638. doi: 10.1080/17470919.2021.1974935. Epub 2021 Sep 2.

10. Morality on the road: Should machine drivers be more utilitarian than human drivers?
    Cognition. 2025 Jan;254:106011. doi: 10.1016/j.cognition.2024.106011. Epub 2024 Nov 19.