People expect artificial moral advisors to be more utilitarian and distrust utilitarian moral advisors.

Authors

Myers Simon, Everett Jim A C

Affiliations

Behavioural Science Group, Warwick Business School, University of Warwick, Scarman Rd, Coventry CV4 7AL, UK; School of Psychology, University of Kent, Canterbury, Kent, CT2 7NP, UK.

School of Psychology, University of Kent, Canterbury, Kent, CT2 7NP, UK.

Publication Information

Cognition. 2025 Mar;256:106028. doi: 10.1016/j.cognition.2024.106028. Epub 2024 Dec 12.

Abstract

As machines powered by artificial intelligence increase in their technological capacities, there is a growing interest in the theoretical and practical idea of artificial moral advisors (AMAs): systems powered by artificial intelligence that are explicitly designed to assist humans in making ethical decisions. Across four pre-registered studies (total N = 2604) we investigated how people perceive and trust artificial moral advisors compared to human advisors. Extending previous work on algorithmic aversion, we show that people have a significant aversion to AMAs (vs humans) giving moral advice, while also showing that this is particularly the case when advisors - human and AI alike - gave advice based on utilitarian principles. We find that participants expect AI to make utilitarian decisions, and that even when participants agreed with a decision made by an AMA, they still expected to disagree with an AMA more than a human in future. Our findings suggest challenges in the adoption of artificial moral advisors, and particularly those who draw on and endorse utilitarian principles - however normatively justifiable.
