Liu Mingxuan, Ning Yilin, Teixayavong Salinelat, Liu Xiaoxuan, Mertens Mayli, Shang Yuqing, Li Xin, Miao Di, Liao Jingchi, Xu Jie, Ting Daniel Shu Wei, Cheng Lionel Tim-Ee, Ong Jasmine Chiat Ling, Teo Zhen Ling, Tan Ting Fang, RaviChandran Narrendar, Wang Fei, Celi Leo Anthony, Ong Marcus Eng Hock, Liu Nan
Centre for Quantitative Medicine, Duke-NUS Medical School, Singapore, Singapore.
College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK.
NPJ Digit Med. 2025 Jun 14;8(1):360. doi: 10.1038/s41746-025-01667-2.
The ethical integration of artificial intelligence (AI) in healthcare necessitates addressing fairness. AI fairness involves mitigating biases in AI and leveraging AI to promote equity. Despite advancements, significant disconnects persist between technical solutions and clinical applications. Through evidence gap analysis, this review systematically pinpoints the gaps at the intersection of healthcare contexts (including medical fields, healthcare datasets, and bias-relevant attributes such as gender/sex) and AI fairness techniques for bias detection, evaluation, and mitigation. We highlight the scarcity of AI fairness research in medical domains, the narrow focus on bias-relevant attributes, the dominance of group fairness centering on model performance equality, and the limited integration of clinician-in-the-loop approaches to improve AI fairness. To bridge the gaps, we propose actionable strategies for future research to accelerate the development of AI fairness in healthcare, ultimately advancing equitable healthcare delivery.