

Comparing discriminatory behavior against AI and humans.

Authors

Zhuang Mike, Deschrijver Eliane, Ramsey Richard, Turel Ofir

Affiliations

School of Computing and Information Systems, The University of Melbourne, Parkville, VIC, 3052, Australia.

School of Psychology, The University of Sydney, A18 Manning Rd, Camperdown, NSW, 2050, Australia.

Publication

Sci Rep. 2025 Mar 29;15(1):10894. doi: 10.1038/s41598-025-94631-9.

DOI: 10.1038/s41598-025-94631-9
PMID: 40157964
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11954899/
Abstract

Although discrimination is typically believed to occur from well-defined categories like ethnicity, disability, and sex, studies have found that discrimination persists in minimal conditions lacking such categories. Participants have been found to preferentially allocate resources based on seemingly arbitrary shared characteristics such as dot estimation choices. Here, we use a preregistered experiment (n = 500) to investigate whether humans discriminate in a similar manner when interacting with artificial intelligence (AI) agents that ostensibly made dot estimations. We hypothesized that because humans harbor prejudice against algorithms relative to other humans (otherwise known as algorithm aversion), the strength of discriminatory behavior may be greater against AI than humans. Surprisingly, we found that participants distributed resources in a similar manner, albeit unequally, to both human and AI agents. Specifically, participants favored the other agent when decisions were aligned. Our findings suggest that discriminatory behavior is less influenced by the recipient's identity and more shaped by choice congruency.


Figures

Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5e46/11954899/62833949b4b0/41598_2025_94631_Fig1_HTML.jpg
Fig. 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5e46/11954899/b6fc382b4c40/41598_2025_94631_Fig2_HTML.jpg
Fig. 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5e46/11954899/8ef52d9560a8/41598_2025_94631_Fig3_HTML.jpg

Similar articles

1. Comparing discriminatory behavior against AI and humans.
Sci Rep. 2025 Mar 29;15(1):10894. doi: 10.1038/s41598-025-94631-9.
2. Investigating Whether AI Will Replace Human Physicians and Understanding the Interplay of the Source of Consultation, Health-Related Stigma, and Explanations of Diagnoses on Patients' Evaluations of Medical Consultations: Randomized Factorial Experiment.
J Med Internet Res. 2025 Mar 5;27:e66760. doi: 10.2196/66760.
3. Latent bias and the implementation of artificial intelligence in medicine.
J Am Med Inform Assoc. 2020 Dec 9;27(12):2020-2023. doi: 10.1093/jamia/ocaa094.
4. Self-reported discrimination and discriminatory behaviour: the role of attachment security.
Br J Soc Psychol. 2012 Jun;51(2):393-403. doi: 10.1111/j.2044-8309.2011.02065.x. Epub 2011 Sep 23.
5. AI-induced indifference: Unfair AI reduces prosociality.
Cognition. 2025 Jan;254:105937. doi: 10.1016/j.cognition.2024.105937. Epub 2024 Sep 23.
6. Young Adult Perspectives on Artificial Intelligence-Based Medication Counseling in China: Discrete Choice Experiment.
J Med Internet Res. 2025 Apr 9;27:e67744. doi: 10.2196/67744.
7. Impact of artificial intelligence on pathologists' decisions: an experiment.
J Am Med Inform Assoc. 2022 Sep 12;29(10):1688-1695. doi: 10.1093/jamia/ocac103.
8. Artificial Intelligence Bias in Health Care: Web-Based Survey.
J Med Internet Res. 2023 Jun 22;25:e41089. doi: 10.2196/41089.
9. Predicting personality or prejudice? Facial inference in the age of artificial intelligence.
Curr Opin Psychol. 2024 Aug;58:101815. doi: 10.1016/j.copsyc.2024.101815. Epub 2024 Jun 21.
10. Artificial intelligence for detecting keratoconus.
Cochrane Database Syst Rev. 2023 Nov 15;11(11):CD014911. doi: 10.1002/14651858.CD014911.pub2.

References cited in this article

1. Unequal resource division occurs in the absence of group division and identity.
Proc Natl Acad Sci U S A. 2025 Feb 18;122(7):e2413797122. doi: 10.1073/pnas.2413797122. Epub 2025 Feb 12.
2. The CASA theory no longer applies to desktop computers.
Sci Rep. 2023 Nov 11;13(1):19693. doi: 10.1038/s41598-023-46527-9.
3. The computer, a choreographer? Aesthetic responses to randomly-generated dance choreography by a computer.
Heliyon. 2022 Dec 30;9(1):e12750. doi: 10.1016/j.heliyon.2022.e12750. eCollection 2023 Jan.
4. Does AI Debias Recruitment? Race, Gender, and AI's "Eradication of Difference".
Philos Technol. 2022;35(4):89. doi: 10.1007/s13347-022-00543-1. Epub 2022 Oct 10.
5. Random effects structure for confirmatory hypothesis testing: Keep it maximal.
J Mem Lang. 2013 Apr;68(3). doi: 10.1016/j.jml.2012.11.001.
6. Why do patients derogate physicians who use a computer-based diagnostic support system?
Med Decis Making. 2013 Jan;33(1):108-18. doi: 10.1177/0272989X12453501. Epub 2012 Jul 20.
7. Experiments in intergroup discrimination.
Sci Am. 1970 Nov;223(5):96-102.