Department of Radiology, Emory University, Atlanta, United States.
Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States.
Br J Radiol. 2023 Oct;96(1150):20230023. doi: 10.1259/bjr.20230023. Epub 2023 Sep 12.
Various forms of artificial intelligence (AI) applications are being deployed and used across many healthcare systems. As their use increases, we are learning how these models fail and how they can perpetuate bias. These lessons make it imperative to prioritize bias evaluation and mitigation for radiology applications, while not ignoring changes in the larger enterprise AI deployment that may have downstream effects on model performance. In this paper, we provide an updated review of known pitfalls causing AI bias and discuss strategies for mitigating these biases within the context of AI deployment in the larger healthcare enterprise. We describe these pitfalls by framing them within the larger AI lifecycle, from problem definition through dataset selection and curation to model training and deployment, emphasizing that bias exists across a spectrum and is a sequela of a combination of both human and machine factors.
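As a concrete illustration of the bias evaluation the abstract calls for, the following is a minimal sketch (not from the paper) of one common audit step: comparing a deployed model's sensitivity across demographic subgroups. The column names ("sex", "label", "prediction") and the disparity tolerance are illustrative assumptions only.

```python
# Minimal subgroup bias audit: compare true-positive rate (sensitivity)
# across demographic groups. Assumes binary ground-truth labels and
# binary model predictions; all names here are hypothetical.
import pandas as pd

def subgroup_sensitivity(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Per-subgroup sensitivity: among truly positive cases,
    the fraction the model predicted positive."""
    positives = df[df["label"] == 1]
    return positives.groupby(group_col)["prediction"].mean()

# Hypothetical audit data: one row per patient.
audit = pd.DataFrame({
    "sex":        ["F", "F", "F", "M", "M", "M"],
    "label":      [1,   1,   0,   1,   1,   0],
    "prediction": [1,   0,   0,   1,   1,   1],
})

tpr = subgroup_sensitivity(audit, "sex")
gap = tpr.max() - tpr.min()
print(tpr)
print(f"Largest sensitivity gap across subgroups: {gap:.2f}")
if gap > 0.1:  # illustrative tolerance, not a clinical standard
    print("Potential performance bias; investigate before deployment.")
```

In practice such an audit would be repeated for each metric of interest (specificity, calibration, etc.) and for each protected attribute available in the deployment population.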