

Unmasking bias in artificial intelligence: a systematic review of bias detection and mitigation strategies in electronic health record-based models.

Author Information

Chen Feng, Wang Liqin, Hong Julie, Jiang Jiaqi, Zhou Li

Affiliations

Department of Biomedical Informatics, Harvard Medical School, Boston, MA 02115, United States.

Department of Biomedical Informatics and Health Education, University of Washington, Seattle, WA 98105, United States.

Publication Information

ArXiv. 2024 Jul 1:arXiv:2310.19917v3.

Abstract

OBJECTIVES

Leveraging artificial intelligence (AI) in conjunction with electronic health records (EHRs) holds transformative potential to improve healthcare. However, bias in AI cannot be overlooked, as it risks worsening healthcare disparities. This study reviews methods for handling various biases in AI models developed using EHR data.

MATERIALS AND METHODS

We conducted a systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, analyzing articles from PubMed, Web of Science, and IEEE published between January 1, 2010, and December 17, 2023. The review identified key biases, outlined strategies for detecting and mitigating bias throughout AI model development, and analyzed metrics for bias assessment.

RESULTS

Of the 450 articles retrieved, 20 met our criteria, revealing 6 major bias types: algorithmic, confounding, implicit, measurement, selection, and temporal. The AI models were primarily developed for predictive tasks, yet none had been deployed in real-world healthcare settings. Five studies concentrated on detecting implicit and algorithmic biases using fairness metrics such as statistical parity, equal opportunity, and predictive equity. Fifteen studies proposed strategies for mitigating biases, especially targeting implicit and selection biases. These strategies, evaluated through both performance and fairness metrics, predominantly involved data collection and preprocessing techniques such as resampling and reweighting.
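To make the terms concrete, the following is a minimal illustrative sketch (not taken from any of the reviewed studies; all function names are our own) of two of the fairness metrics named above, plus a simple inverse-frequency reweighting of the kind commonly used in preprocessing-based mitigation. It assumes binary predictions, binary outcomes, and a binary protected-group attribute encoded as 0/1.

```python
# Illustrative sketch of fairness metrics and reweighting; names and
# signatures are this example's own, not from the reviewed studies.
from collections import Counter


def statistical_parity_diff(y_pred, group):
    """P(y_hat = 1 | group 0) - P(y_hat = 1 | group 1).

    Zero means both groups receive positive predictions at the same rate.
    """
    rate = {}
    for g in (0, 1):
        preds = [p for p, gr in zip(y_pred, group) if gr == g]
        rate[g] = sum(preds) / len(preds)
    return rate[0] - rate[1]


def equal_opportunity_diff(y_true, y_pred, group):
    """TPR(group 0) - TPR(group 1): parity of true-positive rates.

    Zero means truly positive patients are detected equally often
    in both groups.
    """
    tpr = {}
    for g in (0, 1):
        pos_preds = [p for t, p, gr in zip(y_true, y_pred, group)
                     if gr == g and t == 1]
        tpr[g] = sum(pos_preds) / len(pos_preds)
    return tpr[0] - tpr[1]


def inverse_frequency_weights(group):
    """Per-sample weights that upweight the under-represented group.

    Each group's total weight becomes n / n_groups, so both groups
    contribute equally to a weighted training loss.
    """
    counts = Counter(group)
    n = len(group)
    return [n / (len(counts) * counts[g]) for g in group]
```

A near-zero statistical parity difference alongside a large equal opportunity difference would indicate that positive predictions are distributed evenly but accuracy on truly positive cases is not, which is why the reviewed studies report multiple fairness metrics rather than one.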

DISCUSSION

This review highlights evolving strategies to mitigate bias in EHR-based AI models, emphasizing the urgent need for both standardized and detailed reporting of the methodologies and systematic real-world testing and evaluation. Such measures are essential for gauging models' practical impact and fostering ethical AI that ensures fairness and equity in healthcare.


Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a0c5/11247915/959f96402079/nihpp-2310.19917v3-f0001.jpg
