Algorithmic bias in artificial intelligence is a problem-And the root issue is power.

Affiliation

Elaine Marieb College of Nursing, University of Massachusetts Amherst, Amherst, MA.

Publication

Nurs Outlook. 2023 Sep-Oct;71(5):102023. doi: 10.1016/j.outlook.2023.102023. Epub 2023 Aug 13.

DOI: 10.1016/j.outlook.2023.102023
PMID: 37579574
Abstract

BACKGROUND

Artificial intelligence (AI) in health care continues to expand at a rapid rate, impacting both nurses and communities we accompany in care.

PURPOSE

We argue algorithmic bias is but a symptom of a more systemic and longstanding problem: power imbalances related to the creation, development, and use of health care technologies.

METHODS

This commentary responds to Drs. O'Connor and Booth's 2022 article, "Algorithmic bias in health care: Opportunities for nurses to improve equality in the age of artificial intelligence."

DISCUSSION

Nurses need not 'reinvent the wheel' when it comes to AI policy, curricula, or ethics. We can and should follow the lead of communities already working 'from the margins' who provide ample guidance.

CONCLUSION

It's neither feasible nor just to expect individual nurses to counter systemic injustice in health care through individual actions, more technocentric curricula, or industry partnerships. We need disciplinary supports for collective action to renegotiate power for AI tech.


Similar Articles

1. Algorithmic bias in artificial intelligence is a problem-And the root issue is power.
   Nurs Outlook. 2023 Sep-Oct;71(5):102023. doi: 10.1016/j.outlook.2023.102023. Epub 2023 Aug 13.
2. Algorithmic bias in health care: Opportunities for nurses to improve equality in the age of artificial intelligence.
   Nurs Outlook. 2022 Nov-Dec;70(6):780-782. doi: 10.1016/j.outlook.2022.09.003. Epub 2022 Nov 14.
3. Advancing health equity with artificial intelligence.
   J Public Health Policy. 2021 Dec;42(4):602-611. doi: 10.1057/s41271-021-00319-5. Epub 2021 Nov 22.
4. Digital Ageism: Challenges and Opportunities in Artificial Intelligence for Older Adults.
   Gerontologist. 2022 Aug 12;62(7):947-955. doi: 10.1093/geront/gnab167.
5. The selective deployment of AI in healthcare: An ethical algorithm for algorithms.
   Bioethics. 2024 Jun;38(5):391-400. doi: 10.1111/bioe.13281. Epub 2024 Mar 30.
6. Mitigating the risk of artificial intelligence bias in cardiovascular care.
   Lancet Digit Health. 2024 Oct;6(10):e749-e754. doi: 10.1016/S2589-7500(24)00155-9. Epub 2024 Aug 29.
7. Artificial intelligence in nursing and midwifery: A systematic review.
   J Clin Nurs. 2023 Jul;32(13-14):2951-2968. doi: 10.1111/jocn.16478. Epub 2022 Jul 31.
8. Artificial intelligence in nursing: Priorities and opportunities from an international invitational think-tank of the Nursing and Artificial Intelligence Leadership Collaborative.
   J Adv Nurs. 2021 Sep;77(9):3707-3717. doi: 10.1111/jan.14855. Epub 2021 May 18.
9. Ethics and governance of trustworthy medical artificial intelligence.
   BMC Med Inform Decis Mak. 2023 Jan 13;23(1):7. doi: 10.1186/s12911-023-02103-9.
10. Mitigating Racial Bias in Machine Learning.
    J Law Med Ethics. 2022;50(1):92-100. doi: 10.1017/jme.2022.13.

Cited By

1. Explanation and elaboration of MedinAI: guidelines for reporting artificial intelligence studies in medicines, pharmacotherapy, and pharmaceutical services.
   Int J Clin Pharm. 2025 Apr 18. doi: 10.1007/s11096-025-01906-2.
2. Application of Predictive Analytics in Pregnancy, Birth, and Postpartum Nursing Care.
   MCN Am J Matern Child Nurs. 2025;50(2):66-77. doi: 10.1097/NMC.0000000000001082. Epub 2025 Feb 25.
3. Beautiful Bias from ChatGPT.
   J Clin Aesthet Dermatol. 2024 Jun;17(6):10.
4. The Cooperation Between Nurses and a New Digital Colleague "AI-Driven Lifestyle Monitoring" in Long-Term Care for Older Adults: Viewpoint.
   JMIR Nurs. 2024 May 23;7:e56474. doi: 10.2196/56474.
5. Beyond Discrimination: Generative AI Applications and Ethical Challenges in Forensic Psychiatry.
   Front Psychiatry. 2024 Mar 8;15:1346059. doi: 10.3389/fpsyt.2024.1346059. eCollection 2024.