The Algorithmic Divide: A Systematic Review on AI-Driven Racial Disparities in Healthcare.
Author Information
Haider Syed Ali, Borna Sahar, Gomez-Cabello Cesar A, Pressman Sophia M, Haider Clifton R, Forte Antonio Jorge
Affiliations
Division of Plastic Surgery, Mayo Clinic, 4500 San Pablo Rd, Jacksonville, FL, 32224, USA.
Department of Physiology and Biomedical Engineering, Mayo Clinic, Rochester, MN, USA.
Publication Information
J Racial Ethn Health Disparities. 2024 Dec 18. doi: 10.1007/s40615-024-02237-0.
INTRODUCTION
As artificial intelligence (AI) continues to permeate various sectors, concerns have surfaced about disparities arising from its deployment. An AI system's effectiveness depends not only on the quality of its algorithm but also on the integrity of its training data. This systematic review investigates racial disparities perpetuated by AI systems across diverse medical domains and the implications of deploying such systems, particularly in healthcare.
METHODS
Six electronic databases (PubMed, Scopus, IEEE, Google Scholar, EMBASE, and Cochrane) were systematically searched on October 3, 2023. Inclusion criteria were peer-reviewed English-language articles published from 2013 to 2023 that examined instances of racial bias perpetuated by AI in healthcare. Studies conducted outside healthcare settings or addressing biases other than racial bias, as well as letters and opinion pieces, were excluded. Risk of bias was assessed using the CASP criteria for reviews and the Modified Newcastle Scale for observational studies.
RESULTS
Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, 1272 articles were initially identified, of which 26 met the eligibility criteria. Four additional articles were identified via snowballing, yielding 30 articles for analysis. The included studies indicate a significant association between AI utilization and the exacerbation of racial disparities, especially in minority populations, including Blacks and Hispanics. Biased data, algorithm design, unfair deployment of algorithms, and historic and systemic inequities were identified as the causes. Limitations stem from heterogeneity among the included studies, which impeded broad comparisons and precluded meta-analysis.
CONCLUSION
To address racial disparities in healthcare outcomes, enhanced ethical considerations and regulatory frameworks are needed for AI applications in healthcare. Comprehensive bias detection tools and mitigation strategies, coupled with active physician supervision, are essential to ensure that AI becomes a tool for reducing, rather than perpetuating, these disparities.
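The review does not prescribe a particular bias detection tool, so the following is only an illustrative sketch of the kind of check such tools perform: auditing a hypothetical binary classifier by comparing sensitivity (true-positive rate) across self-reported racial groups. All variable names and data below are assumptions for illustration, not taken from the included studies.

from collections import defaultdict

def true_positive_rate_by_group(y_true, y_pred, group):
    """Return the true-positive rate (sensitivity) for each subgroup."""
    counts = defaultdict(lambda: {"tp": 0, "pos": 0})
    for truth, pred, g in zip(y_true, y_pred, group):
        if truth == 1:
            counts[g]["pos"] += 1
            if pred == 1:
                counts[g]["tp"] += 1
    return {g: c["tp"] / c["pos"] for g, c in counts.items() if c["pos"] > 0}

def max_tpr_gap(tpr_by_group):
    """Largest sensitivity gap across subgroups (0 indicates parity)."""
    rates = list(tpr_by_group.values())
    return max(rates) - min(rates)

# Hypothetical example: the model misses more true cases in group "B".
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]
group = ["A", "A", "A", "A", "B", "B", "B", "B"]

tpr = true_positive_rate_by_group(y_true, y_pred, group)
print(tpr)               # {'A': 1.0, 'B': 0.5}
print(max_tpr_gap(tpr))  # 0.5 -> flags a sensitivity disparity between groups

A large sensitivity gap of this kind is one concrete signal that a model trained on biased or unrepresentative data may under-detect disease in a minority population, consistent with the biased-data mechanism identified above.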