Misguided Artificial Intelligence: How Racial Bias is Built Into Clinical Models.

Author Information

Atin Jindal

Affiliation

Division of Hospital Medicine, The Miriam Hospital, Lifespan Health System, Warren Alpert Medical School of Brown University, Providence, RI.

Publication Information

Brown J Hosp Med. 2022 Sep 5;2(1):38021. doi: 10.56305/001c.38021. eCollection 2023.

Abstract

Artificial intelligence is being used today to solve a myriad of problems. While there is significant promise that AI can help us address many healthcare issues, there is also concern that it may exacerbate health inequities. This article looks specifically at racial bias in predictive models. Each phase of the model-building process, including raw data collection and processing, data labelling, and implementation of the model, can be subject to racial bias. This article aims to explore some of the ways in which this occurs.
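
The abstract notes that bias can enter at each phase of model building, including data labelling. As a hedged illustration only (the scenario, variable names, and numbers below are hypothetical and not taken from the article), the following Python sketch uses synthetic data to show one such mechanism: when recorded healthcare cost is used as a proxy label for true illness burden and one group has historically received less care, a cost-ranked risk score flags that group less often and only when its members are sicker.

# Minimal sketch of label-proxy bias; all data are synthetic and hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two groups drawn with identical underlying illness burden (arbitrary units).
group = rng.integers(0, 2, size=n)               # 0 = group A, 1 = group B
burden = rng.normal(loc=5.0, scale=1.5, size=n)  # true clinical need

# Assumption for illustration: group B historically receives less care for the
# same burden, so the recorded cost (the proxy label) is systematically lower.
access = np.where(group == 1, 0.6, 1.0)
cost = burden * access + rng.normal(scale=0.5, size=n)

# "Model": rank patients by the proxy label and flag the top 20% for extra care.
threshold = np.quantile(cost, 0.80)
flagged = cost >= threshold

# Compare outcomes by group: group B is flagged less often, and its flagged
# members carry a higher true burden, i.e. they must be sicker to qualify.
for g, name in [(0, "Group A"), (1, "Group B")]:
    in_group = group == g
    sel = flagged & in_group
    print(f"{name}: {sel.sum() / in_group.sum():.1%} flagged, "
          f"mean true burden among flagged = {burden[sel].mean():.2f}")

In this toy setup, replacing the cost label with a direct measure of clinical need removes the disparity, which is the kind of labelling choice the article identifies as a source of bias.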

Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1050/11878858/f1260f7c0e97/bhm_2023_2_1_38021_98436.jpg
