

Bias Mitigation in Primary Health Care Artificial Intelligence Models: Scoping Review.

Author Information

Sasseville Maxime, Ouellet Steven, Rhéaume Caroline, Sahlia Malek, Couture Vincent, Després Philippe, Paquette Jean-Sébastien, Darmon David, Bergeron Frédéric, Gagnon Marie-Pierre

Affiliations

Faculté des sciences infirmières, Université Laval, Québec, QC, Canada.

Vitam Research Center on Sustainable Health, Québec, QC, Canada.

Publication Information

J Med Internet Res. 2025 Jan 7;27:e60269. doi: 10.2196/60269.

Abstract

BACKGROUND

Artificial intelligence (AI) predictive models in primary health care have the potential to enhance population health by rapidly and accurately identifying individuals who should receive care and health services. However, these models also carry the risk of perpetuating or amplifying existing biases toward diverse groups. We identified a gap in the current understanding of strategies used to assess and mitigate bias in primary health care algorithms related to individuals' personal or protected attributes.

OBJECTIVE

This study aimed to describe the attempts, strategies, and methods used to mitigate bias in AI models within primary health care, to identify the diverse groups or protected attributes considered, and to evaluate the results of these approaches on both bias reduction and AI model performance.

METHODS

We conducted a scoping review following Joanna Briggs Institute (JBI) guidelines, searching Medline (Ovid), CINAHL (EBSCO), PsycINFO (Ovid), and Web of Science databases for studies published between January 1, 2017, and November 15, 2022. Pairs of reviewers independently screened titles and abstracts, applied selection criteria, and performed full-text screening. Discrepancies regarding study inclusion were resolved by consensus. Following reporting standards for AI in health care, we extracted data on study objectives, model features, targeted diverse groups, mitigation strategies used, and results. Using the mixed methods appraisal tool, we appraised the quality of the studies.

RESULTS

After removing 585 duplicates, we screened 1018 titles and abstracts. From the remaining 189 full-text articles, we included 17 studies. The most frequently investigated protected attributes were race (or ethnicity), examined in 12 of the 17 studies, and sex (often identified as gender and typically classified as "male versus female"), examined in 10 studies. We categorized bias mitigation approaches into four clusters: (1) modifying existing AI models or datasets, (2) sourcing data from electronic health records, (3) developing tools with a "human-in-the-loop" approach, and (4) identifying ethical principles for informed decision-making. Algorithmic preprocessing methods, such as relabeling and reweighing data, along with natural language processing techniques that extract data from unstructured notes, showed the greatest potential for bias mitigation. Other methods aimed at enhancing model fairness included group recalibration and the application of the equalized odds metric. However, these approaches sometimes exacerbated prediction errors across groups or led to overall model miscalibrations.
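The results single out algorithmic preprocessing, including reweighing, as showing the greatest potential for bias mitigation. The sketch below is a minimal illustration assuming the commonly cited Kamiran-Calders formulation of reweighing, in which each record receives a weight that makes the protected attribute statistically independent of the outcome label in the weighted data; the function name and toy data are illustrative assumptions, not details drawn from any of the 17 included studies.

```python
"""Minimal sketch of reweighing (Kamiran-Calders style): weight each record
by the expected count of its (group, label) cell under independence divided
by the observed count of that cell. Illustrative only."""
from collections import Counter


def reweighing_weights(protected, labels):
    """Return one weight per record so that, after weighting, the protected
    attribute A and the outcome label Y are statistically independent."""
    n = len(labels)
    group_counts = Counter(protected)              # counts of A = a
    label_counts = Counter(labels)                 # counts of Y = y
    cell_counts = Counter(zip(protected, labels))  # counts of (A = a, Y = y)

    weights = []
    for a, y in zip(protected, labels):
        expected = group_counts[a] * label_counts[y] / n
        observed = cell_counts[(a, y)]
        weights.append(expected / observed)
    return weights


if __name__ == "__main__":
    # Toy example: the positive label is under-represented in group "B",
    # so (B, 1) records are up-weighted and (A, 1) records are down-weighted.
    protected = ["A", "A", "A", "A", "B", "B", "B", "B"]
    labels = [1, 1, 1, 0, 1, 0, 0, 0]
    for a, y, w in zip(protected, labels, reweighing_weights(protected, labels)):
        print(f"group={a} label={y} weight={w:.2f}")
```

The resulting weights could then be passed to any estimator that accepts a sample_weight argument, and group-wise true positive and false positive rates could be compared before and after weighting, which is what the equalized odds criterion mentioned above formalizes.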

CONCLUSIONS

The results suggest that biases toward diverse groups are more easily mitigated when data are open sourced, when multiple stakeholders are engaged, and when mitigation is applied during the algorithm's preprocessing stage. Further empirical studies that include a broader range of groups, such as Indigenous peoples in Canada, are needed to validate and expand upon these findings.

TRIAL REGISTRATION

OSF Registry osf.io/9ngz5/; https://osf.io/9ngz5/.

INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): RR2-10.2196/46684.

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/40c6/11751650/b1ead91b8f00/jmir_v27i1e60269_fig1.jpg
