Veterans Affairs Palo Alto Health Care System, Palo Alto, California, and Stanford University School of Medicine, Stanford, California, USA.
Department of Internal Medicine and Equity Research and Innovation Center, Yale School of Medicine, New Haven, Connecticut, USA.
J Biomed Inform. 2024 Sep;157:104693. doi: 10.1016/j.jbi.2024.104693. Epub 2024 Jul 15.
Understanding and quantifying biases when designing and implementing actionable approaches to increase fairness and inclusion is critical for artificial intelligence (AI) in biomedical applications.
In this Special Communication, we discuss how bias is introduced at different stages of the development and use of AI applications in biomedical sciences and health care. We describe various AI applications and their implications for fairness and inclusion in sections on 1) Bias in Data Source Landscapes, 2) Algorithmic Fairness, 3) Uncertainty in AI Predictions, 4) Explainable AI for Fairness and Equity, and 5) Sociological/Ethnographic Issues in Data and Results Representation.
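As a concrete illustration of the "Algorithmic Fairness" theme (a minimal sketch of our own, not code from the article), the snippet below computes the demographic parity gap: the absolute difference in positive-prediction rates between two patient groups, one common way to quantify the kind of bias discussed here. The function name and toy data are hypothetical.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive-prediction rate in group 0
    rate_1 = y_pred[group == 1].mean()  # positive-prediction rate in group 1
    return abs(rate_0 - rate_1)

# Toy data: the model flags 60% of group 0 but only 20% of group 1.
y_pred = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]
group  = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(f"{demographic_parity_gap(y_pred, group):.2f}")  # 0.40
```

A gap of zero means both groups receive positive predictions at the same rate; larger values flag a disparity that warrants investigation, though which fairness metric is appropriate depends on the clinical context.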
We provide recommendations to address biases when developing and using AI in clinical applications.
These recommendations can be applied to informatics research and practice to foster more equitable and inclusive health care systems and research discoveries.