
Artificial Intelligence Can't Be Charmed: The Effects of Impartiality on Laypeople's Algorithmic Preferences.

Authors

Claudy Marius C, Aquino Karl, Graso Maja

Affiliations

College of Business, University College Dublin, Dublin, Ireland.

Sauder School of Business, University of British Columbia, Vancouver, BC, Canada.

Publication

Front Psychol. 2022 Jun 29;13:898027. doi: 10.3389/fpsyg.2022.898027. eCollection 2022.

DOI: 10.3389/fpsyg.2022.898027
PMID: 35846643
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9277554/
Abstract

Over the coming years, AI could increasingly replace humans for making complex decisions because of the promise it holds for standardizing and debiasing decision-making procedures. Despite intense debates regarding algorithmic fairness, little research has examined how laypeople react when resource-allocation decisions are turned over to AI. We address this question by examining the role of perceived impartiality as a factor that can influence the acceptance of AI as a replacement for human decision-makers. We posit that laypeople attribute greater impartiality to AI than human decision-makers. Our investigation shows that people value impartiality in decision procedures that concern the allocation of scarce resources and that people perceive AI as more capable of impartiality than humans. Yet, paradoxically, laypeople prefer human decision-makers in allocation decisions. This preference reverses when potential human biases are made salient. The findings highlight the importance of impartiality in AI and thus hold implications for the design of policy measures.


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4104/9277554/25a0e83dea9e/fpsyg-13-898027-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4104/9277554/fc113141a083/fpsyg-13-898027-g002.jpg

Similar articles

1
Artificial Intelligence Can't Be Charmed: The Effects of Impartiality on Laypeople's Algorithmic Preferences.
Front Psychol. 2022 Jun 29;13:898027. doi: 10.3389/fpsyg.2022.898027. eCollection 2022.
2
Who should be first? How and when AI-human order influences procedural justice in a multistage decision-making process.
PLoS One. 2023 Jul 17;18(7):e0284840. doi: 10.1371/journal.pone.0284840. eCollection 2023.
3
Should AI allocate livers for transplant? Public attitudes and ethical considerations.
BMC Med Ethics. 2023 Nov 27;24(1):102. doi: 10.1186/s12910-023-00983-0.
4
Inequality threat increases laypeople's, but not judges', acceptance of algorithmic decision making in court.
Law Hum Behav. 2024 Oct-Dec;48(5-6):441-455. doi: 10.1037/lhb0000577. Epub 2024 Sep 12.
5
Population Preferences for Performance and Explainability of Artificial Intelligence in Health Care: Choice-Based Conjoint Survey.
J Med Internet Res. 2021 Dec 13;23(12):e26611. doi: 10.2196/26611.
6
From fair predictions to just decisions? Conceptualizing algorithmic fairness and distributive justice in the context of data-driven decision-making.
Front Sociol. 2022 Oct 10;7:883999. doi: 10.3389/fsoc.2022.883999. eCollection 2022.
7
Hammer or Measuring Tape? Artificial Intelligence and Justice in Healthcare.
Camb Q Healthc Ethics. 2023 May 16:1-12. doi: 10.1017/S0963180123000257.
8
Exploring the role of AI algorithmic agents: The impact of algorithmic decision autonomy on consumer purchase decisions.
Front Psychol. 2022 Oct 20;13:1009173. doi: 10.3389/fpsyg.2022.1009173. eCollection 2022.
9
Ethical machines: The human-centric use of artificial intelligence.
iScience. 2021 Mar 3;24(3):102249. doi: 10.1016/j.isci.2021.102249. eCollection 2021 Mar 19.
10
New and emerging technology for adult social care - the example of home sensors with artificial intelligence (AI) technology.
Health Soc Care Deliv Res. 2023 Jun;11(9):1-64. doi: 10.3310/HRYW4281.

Cited by

1
Artificial intelligence and illusions of understanding in scientific research.
Nature. 2024 Mar;627(8002):49-58. doi: 10.1038/s41586-024-07146-0. Epub 2024 Mar 6.
2
Humans inherit artificial intelligence biases.
Sci Rep. 2023 Oct 3;13(1):15737. doi: 10.1038/s41598-023-42384-8.
3
Explainable AI as evidence of fair decisions.

References

1
Power and decision making: new directions for research in the age of artificial intelligence.
Curr Opin Psychol. 2020 Jun;33:172-176. doi: 10.1016/j.copsyc.2019.07.039. Epub 2019 Aug 1.
2
Psychological reactions to human versus robotic job replacement.
Nat Hum Behav. 2019 Oct;3(10):1062-1069. doi: 10.1038/s41562-019-0670-y. Epub 2019 Aug 5.
3
Artificial intelligence can improve decision-making in infection management.
Nat Hum Behav. 2019 Jun;3(6):543-545. doi: 10.1038/s41562-019-0583-9.
4
Machine behaviour.
Nature. 2019 Apr;568(7753):477-486. doi: 10.1038/s41586-019-1138-y. Epub 2019 Apr 24.
5
Holding Robots Responsible: The Elements of Machine Morality.
Trends Cogn Sci. 2019 May;23(5):365-368. doi: 10.1016/j.tics.2019.02.008. Epub 2019 Apr 5.
6
Toward understanding the impact of artificial intelligence on labor.
Proc Natl Acad Sci U S A. 2019 Apr 2;116(14):6531-6539. doi: 10.1073/pnas.1900949116. Epub 2019 Mar 25.
7
Artificial intelligence to support human instruction.
Proc Natl Acad Sci U S A. 2019 Mar 5;116(10):3953-3955. doi: 10.1073/pnas.1900370116. Epub 2019 Feb 19.
8
People are averse to machines making moral decisions.
Cognition. 2018 Dec;181:21-34. doi: 10.1016/j.cognition.2018.08.003. Epub 2018 Aug 11.
9
AI can be sexist and racist - it's time to make it fair.
Nature. 2018 Jul;559(7714):324-326. doi: 10.1038/d41586-018-05707-8.
10
The Artificial Moral Advisor. The "Ideal Observer" Meets Artificial Intelligence.
Philos Technol. 2018;31(2):169-188. doi: 10.1007/s13347-017-0285-z. Epub 2017 Dec 8.