Algorithmic bias in artificial intelligence is a problem-And the root issue is power.

Affiliation

Elaine Marieb College of Nursing, University of Massachusetts Amherst, Amherst, MA.

Publication information

Nurs Outlook. 2023 Sep-Oct;71(5):102023. doi: 10.1016/j.outlook.2023.102023. Epub 2023 Aug 13.

Abstract

BACKGROUND

Artificial intelligence (AI) in health care continues to expand at a rapid rate, impacting both nurses and the communities we accompany in care.

PURPOSE

We argue algorithmic bias is but a symptom of a more systemic and longstanding problem: power imbalances related to the creation, development, and use of health care technologies.

METHODS

This commentary responds to Drs. O'Connor and Booth's 2022 article, "Algorithmic bias in health care: Opportunities for nurses to improve equality in the age of artificial intelligence."

DISCUSSION

Nurses need not 'reinvent the wheel' when it comes to AI policy, curricula, or ethics. We can and should follow the lead of communities already working 'from the margins' who provide ample guidance.

CONCLUSION

It's neither feasible nor just to expect individual nurses to counter systemic injustice in health care through individual actions, more technocentric curricula, or industry partnerships. We need disciplinary supports for collective action to renegotiate power for AI technology.
