Human bias in algorithm design.

Authors

Morewedge Carey K, Mullainathan Sendhil, Naushan Haaya F, Sunstein Cass R, Kleinberg Jon, Raghavan Manish, Ludwig Jens O

Affiliations

Questrom School of Business, Boston University, Boston, MA, USA.

Chicago Booth School of Business, University of Chicago, Chicago, IL, USA.

Publication

Nat Hum Behav. 2023 Nov;7(11):1822-1824. doi: 10.1038/s41562-023-01724-4.

DOI: 10.1038/s41562-023-01724-4
PMID: 37985907
Abstract

Similar Articles

1. Human bias in algorithm design.
Nat Hum Behav. 2023 Nov;7(11):1822-1824. doi: 10.1038/s41562-023-01724-4.
2. Artificial intelligence for diagnosing exudative age-related macular degeneration.
Cochrane Database Syst Rev. 2024 Oct 17;10(10):CD015522. doi: 10.1002/14651858.CD015522.pub2.
3. Evolution and impact of bias in human and machine learning algorithm interaction.
PLoS One. 2020 Aug 13;15(8):e0235502. doi: 10.1371/journal.pone.0235502. eCollection 2020.
4. Guiding Principles to Address the Impact of Algorithm Bias on Racial and Ethnic Disparities in Health and Health Care.
JAMA Netw Open. 2023 Dec 1;6(12):e2345050. doi: 10.1001/jamanetworkopen.2023.45050.
5. People see more of their biases in algorithms.
Proc Natl Acad Sci U S A. 2024 Apr 16;121(16):e2317602121. doi: 10.1073/pnas.2317602121. Epub 2024 Apr 10.
6. Awareness of Racial and Ethnic Bias and Potential Solutions to Address Bias With Use of Health Care Algorithms.
JAMA Health Forum. 2023 Jun 2;4(6):e231197. doi: 10.1001/jamahealthforum.2023.1197.
7.
8. Accuracy comparison across face recognition algorithms: Where are we on measuring race bias?
IEEE Trans Biom Behav Identity Sci. 2021 Jan;3(1):101-111. doi: 10.1109/TBIOM.2020.3027269. Epub 2020 Sep 29.
9. Mitigating the risk of artificial intelligence bias in cardiovascular care.
Lancet Digit Health. 2024 Oct;6(10):e749-e754. doi: 10.1016/S2589-7500(24)00155-9. Epub 2024 Aug 29.
10. Human-algorithm teaming in face recognition: How algorithm outcomes cognitively bias human decision-making.
PLoS One. 2020 Aug 21;15(8):e0237855. doi: 10.1371/journal.pone.0237855. eCollection 2020.

Cited By

1. Engagement, user satisfaction, and the amplification of divisive content on social media.
PNAS Nexus. 2025 Mar 5;4(3):pgaf062. doi: 10.1093/pnasnexus/pgaf062. eCollection 2025 Mar.
2. High risk of political bias in black box emotion inference models.
Sci Rep. 2025 Feb 19;15(1):6028. doi: 10.1038/s41598-025-86766-6.
3. How human-AI feedback loops alter human perceptual, emotional and social judgements.

References

1. Dissecting racial bias in an algorithm used to manage the health of populations.
Science. 2019 Oct 25;366(6464):447-453. doi: 10.1126/science.aax2342.
2. How Are Preferences Revealed?
J Public Econ. 2008 Aug;92(8-9):1787-1794. doi: 10.1016/j.jpubeco.2008.04.010.
3. Associative processes in intuitive judgment.
Nat Hum Behav. 2025 Feb;9(2):345-359. doi: 10.1038/s41562-024-02077-2. Epub 2024 Dec 18.
4. Social connection as a critical factor for mental and physical health: evidence, trends, challenges, and future implications.
World Psychiatry. 2024 Oct;23(3):312-332. doi: 10.1002/wps.21224.
5. The consequences of AI training on human decision-making.
Proc Natl Acad Sci U S A. 2024 Aug 13;121(33):e2408731121. doi: 10.1073/pnas.2408731121. Epub 2024 Aug 6.
6. People see more of their biases in algorithms.
Proc Natl Acad Sci U S A. 2024 Apr 16;121(16):e2317602121. doi: 10.1073/pnas.2317602121. Epub 2024 Apr 10.
7.
Trends Cogn Sci. 2010 Oct;14(10):435-40. doi: 10.1016/j.tics.2010.07.004. Epub 2010 Aug 7.