Hilling Denise E, Ihaddouchen Imane, Buijsman Stefan, Townsend Reggie, Gommers Diederik, van Genderen Michel E
Department of Gastrointestinal Surgery and Surgical Oncology, Erasmus MC Cancer Institute, University Medical Center, Rotterdam, Netherlands.
Erasmus MC Datahub, University Medical Center, Rotterdam, Netherlands.
Front Artif Intell. 2025 Apr 16;8:1577529. doi: 10.3389/frai.2025.1577529. eCollection 2025.
Artificial Intelligence (AI) in healthcare holds transformative potential but faces critical challenges in ethical accountability and systemic inequities. Biases in AI models, such as lower diagnosis rates for Black women or gender stereotyping in Large Language Models, highlight the urgent need to address historical and structural inequalities in data and development processes. Disparities in clinical trials and datasets, which are often skewed toward high-income, English-speaking regions, amplify these issues. Moreover, the underrepresentation of marginalized groups among AI developers and researchers exacerbates these challenges. To ensure equitable AI, diverse data collection, federated data-sharing frameworks, and bias-correction techniques are essential. Structural initiatives, such as fairness audits, transparent AI model development processes, and early registration of clinical AI models, alongside inclusive global collaborations like TRAIN-Europe and CHAI, can drive responsible AI adoption. Prioritizing diversity in datasets and among developers and researchers, as well as implementing transparent governance, will foster AI systems that uphold ethical principles and deliver equitable healthcare outcomes globally.