

Algorithmic bias: Social science research integration through the 3-D Dependable AI Framework.

Author information

Ukanwa Kalinda

Affiliation

University of California, Irvine, United States.

Publication information

Curr Opin Psychol. 2024 Aug;58:101836. doi: 10.1016/j.copsyc.2024.101836. Epub 2024 Jul 1.

DOI: 10.1016/j.copsyc.2024.101836
PMID: 38981371
Abstract

Algorithmic bias has emerged as a critical challenge in the age of responsible production of artificial intelligence (AI). This paper reviews recent research on algorithmic bias and proposes increased engagement of psychological and social science research to understand antecedents and consequences of algorithmic bias. Through the lens of the 3-D Dependable AI Framework, this article explores how social science disciplines, such as psychology, can contribute to identifying and mitigating bias at the Design, Develop, and Deploy stages of the AI life cycle. Finally, we propose future research directions to further address the complexities of algorithmic bias and its societal implications.


Similar articles

1. Algorithmic bias: Social science research integration through the 3-D Dependable AI Framework.
   Curr Opin Psychol. 2024 Aug;58:101836. doi: 10.1016/j.copsyc.2024.101836. Epub 2024 Jul 1.
2. Recommendations to promote fairness and inclusion in biomedical AI research and clinical use.
   J Biomed Inform. 2024 Sep;157:104693. doi: 10.1016/j.jbi.2024.104693. Epub 2024 Jul 15.
3. Multidisciplinary considerations of fairness in medical AI: A scoping review.
   Int J Med Inform. 2023 Oct;178:105175. doi: 10.1016/j.ijmedinf.2023.105175. Epub 2023 Aug 8.
4. A scoping review of fair machine learning techniques when using real-world data.
   J Biomed Inform. 2024 Mar;151:104622. doi: 10.1016/j.jbi.2024.104622. Epub 2024 Mar 6.
5. Mitigating the risk of artificial intelligence bias in cardiovascular care.
   Lancet Digit Health. 2024 Oct;6(10):e749-e754. doi: 10.1016/S2589-7500(24)00155-9. Epub 2024 Aug 29.
6. Can Generative AI improve social science?
   Proc Natl Acad Sci U S A. 2024 May 21;121(21):e2314021121. doi: 10.1073/pnas.2314021121. Epub 2024 May 9.
7. Unmasking bias in artificial intelligence: a systematic review of bias detection and mitigation strategies in electronic health record-based models.
   J Am Med Inform Assoc. 2024 Apr 19;31(5):1172-1183. doi: 10.1093/jamia/ocae060.
8. Engineering Bias in AI.
   IEEE Pulse. 2019 Jan-Feb;10(1):15-17. doi: 10.1109/MPULS.2018.2885857.
9. A Call to Action on Assessing and Mitigating Bias in Artificial Intelligence Applications for Mental Health.
   Perspect Psychol Sci. 2023 Sep;18(5):1062-1096. doi: 10.1177/17456916221134490. Epub 2022 Dec 9.
10. A roadmap to artificial intelligence (AI): Methods for designing and building AI ready data to promote fairness.
    J Biomed Inform. 2024 Jun;154:104654. doi: 10.1016/j.jbi.2024.104654. Epub 2024 May 11.

Cited by

1. Ethical Artificial Intelligence in Nursing Workforce Management and Policymaking: Bridging Philosophy and Practice.
   J Nurs Manag. 2025 Apr 8;2025:7954013. doi: 10.1155/jonm/7954013. eCollection 2025.