
Bias Mitigation in Primary Health Care Artificial Intelligence Models: Scoping Review.

Author Information

Sasseville Maxime, Ouellet Steven, Rhéaume Caroline, Sahlia Malek, Couture Vincent, Després Philippe, Paquette Jean-Sébastien, Darmon David, Bergeron Frédéric, Gagnon Marie-Pierre

Affiliations

Faculté des sciences infirmières, Université Laval, Québec, QC, Canada.

Vitam Research Center on Sustainable Health, Québec, QC, Canada.

Publication Information

J Med Internet Res. 2025 Jan 7;27:e60269. doi: 10.2196/60269.


DOI: 10.2196/60269
PMID: 39773888
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11751650/
Abstract

BACKGROUND: Artificial intelligence (AI) predictive models in primary health care have the potential to enhance population health by rapidly and accurately identifying individuals who should receive care and health services. However, these models also carry the risk of perpetuating or amplifying existing biases toward diverse groups. We identified a gap in the current understanding of strategies used to assess and mitigate bias in primary health care algorithms related to individuals' personal or protected attributes.

OBJECTIVE: This study aimed to describe the attempts, strategies, and methods used to mitigate bias in AI models within primary health care, to identify the diverse groups or protected attributes considered, and to evaluate the results of these approaches on both bias reduction and AI model performance.

METHODS: We conducted a scoping review following Joanna Briggs Institute (JBI) guidelines, searching Medline (Ovid), CINAHL (EBSCO), PsycINFO (Ovid), and Web of Science databases for studies published between January 1, 2017, and November 15, 2022. Pairs of reviewers independently screened titles and abstracts, applied selection criteria, and performed full-text screening. Discrepancies regarding study inclusion were resolved by consensus. Following reporting standards for AI in health care, we extracted data on study objectives, model features, targeted diverse groups, mitigation strategies used, and results. Using the mixed methods appraisal tool, we appraised the quality of the studies.

RESULTS: After removing 585 duplicates, we screened 1018 titles and abstracts. From the remaining 189 full-text articles, we included 17 studies. The most frequently investigated protected attributes were race (or ethnicity), examined in 12 of the 17 studies, and sex (often identified as gender), typically classified as "male versus female" in 10 of the studies. We categorized bias mitigation approaches into four clusters: (1) modifying existing AI models or datasets, (2) sourcing data from electronic health records, (3) developing tools with a "human-in-the-loop" approach, and (4) identifying ethical principles for informed decision-making. Algorithmic preprocessing methods, such as relabeling and reweighing data, along with natural language processing techniques that extract data from unstructured notes, showed the greatest potential for bias mitigation. Other methods aimed at enhancing model fairness included group recalibration and the application of the equalized odds metric. However, these approaches sometimes exacerbated prediction errors across groups or led to overall model miscalibrations.

CONCLUSIONS: The results suggest that biases toward diverse groups are more easily mitigated when data are open-sourced, multiple stakeholders are engaged, and during the algorithm's preprocessing stage. Further empirical studies that include a broader range of groups, such as Indigenous peoples in Canada, are needed to validate and expand upon these findings.

TRIAL REGISTRATION: OSF Registry osf.io/9ngz5/; https://osf.io/9ngz5/.

INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): RR2-10.2196/46684.

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/40c6/11751650/b1ead91b8f00/jmir_v27i1e60269_fig1.jpg
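The results single out two techniques by name: reweighing (a preprocessing method) and the equalized odds metric. As an illustrative sketch only — not code from any of the reviewed studies, with function names of my own choosing — here is how both are commonly computed in plain Python:

```python
from collections import Counter

def reweighing_weights(groups, labels):
    # Kamiran-Calders reweighing: weight each (group, label) cell by
    # P(A=a) * P(Y=y) / P(A=a, Y=y), so that after weighting, the
    # protected attribute A and the outcome Y are statistically independent.
    n = len(labels)
    count_g = Counter(groups)
    count_y = Counter(labels)
    count_gy = Counter(zip(groups, labels))
    return [count_g[g] * count_y[y] / (n * count_gy[(g, y)])
            for g, y in zip(groups, labels)]

def equalized_odds_gap(y_true, y_pred, groups):
    # Equalized odds asks that true-positive and false-positive rates match
    # across groups; this returns the largest gap in either rate (0.0 means
    # perfectly equalized). Assumes every group has both positive and
    # negative ground-truth examples.
    def rates(g):
        pos = [p for t, p, gi in zip(y_true, y_pred, groups) if gi == g and t == 1]
        neg = [p for t, p, gi in zip(y_true, y_pred, groups) if gi == g and t == 0]
        return sum(pos) / len(pos), sum(neg) / len(neg)
    per_group = [rates(g) for g in set(groups)]
    tprs = [r[0] for r in per_group]
    fprs = [r[1] for r in per_group]
    return max(max(tprs) - min(tprs), max(fprs) - min(fprs))
```

Per the review's findings, the reweighing step (applied before training) tends to mitigate bias more reliably than post hoc equalized-odds adjustment, which can worsen per-group prediction error or miscalibrate the model overall.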


