Department of Radiology, Emory University, Atlanta, GA, USA.
School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, AZ, USA.
Lancet Digit Health. 2022 Jun;4(6):e406-e414. doi: 10.1016/S2589-7500(22)00063-2. Epub 2022 May 11.
Previous studies in medical imaging have shown disparate abilities of artificial intelligence (AI) to detect a person's race, yet there is no known correlation for race on medical imaging that would be obvious to human experts when interpreting the images. We aimed to conduct a comprehensive evaluation of the ability of AI to recognise a patient's racial identity from medical images.
Using private (Emory CXR, Emory Chest CT, Emory Cervical Spine, and Emory Mammogram) and public (MIMIC-CXR, CheXpert, National Lung Cancer Screening Trial, RSNA Pulmonary Embolism CT, and Digital Hand Atlas) datasets, we first quantified the performance of deep learning models in detecting race from medical images, including the ability of these models to generalise to external environments and across multiple imaging modalities. Second, we assessed possible confounding by anatomic and phenotypic population features, both by testing the ability of these hypothesised confounders to detect race in isolation using regression models and by re-evaluating the deep learning models on datasets stratified by these hypothesised confounding variables. Last, by exploring the effect of image corruptions on model performance, we investigated the underlying mechanism by which AI models recognise race.
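Performance throughout the study is reported as the area under the receiver operating characteristics curve (AUC). As an illustrative sketch only (not the authors' evaluation code), AUC for a binary detector can be computed directly as the Mann-Whitney probability that a randomly chosen positive example is scored above a randomly chosen negative one:

```python
def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive example is scored higher than a randomly
    chosen negative one, with ties counted as 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# A perfect separator scores 1.0; chance performance is 0.5.
print(auc([0.9, 0.8, 0.7], [0.2, 0.1, 0.3]))  # → 1.0
print(auc([0.5, 0.5], [0.5, 0.5]))            # → 0.5
```

On this scale, the confounder results reported below (AUC 0·55-0·61) sit close to the 0·5 chance level, whereas the deep learning models reach 0·81-0·99.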
In our study, we show that standard AI deep learning models can be trained to predict race from medical images with high performance across multiple imaging modalities, which was sustained under external validation conditions (x-ray imaging [area under the receiver operating characteristics curve (AUC) range 0·91-0·99], CT chest imaging [0·87-0·96], and mammography [0·81]). We also show that this detection is not due to proxies or imaging-related surrogate covariates for race (eg, performance of possible confounders: body-mass index [AUC 0·55], disease distribution [0·61], and breast density [0·61]). Finally, we provide evidence that the ability of AI deep learning models persisted over all anatomical regions and frequency spectra of the images, suggesting that efforts to control this behaviour when it is undesirable will be challenging and will demand further study.
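The frequency-spectrum corruptions referred to above can be sketched as follows. This is a hedged illustration of the general technique, not the paper's actual pipeline: the functions below (hypothetical names `low_pass` and `high_pass`) degrade an image by keeping or discarding its lowest spatial frequencies via a 2-D FFT, the kind of corruption under which race detection is reported to persist.

```python
import numpy as np

def low_pass(image, keep_frac=0.1):
    """Zero out all but the lowest spatial frequencies of a 2-D image,
    keeping a centred square covering roughly keep_frac of each axis."""
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    mask = np.zeros(f.shape, dtype=bool)
    kh, kw = max(1, int(h * keep_frac)), max(1, int(w * keep_frac))
    cy, cx = h // 2, w // 2
    mask[cy - kh // 2 : cy + kh // 2 + 1,
         cx - kw // 2 : cx + kw // 2 + 1] = True
    return np.real(np.fft.ifft2(np.fft.ifftshift(np.where(mask, f, 0))))

def high_pass(image, drop_frac=0.1):
    """Complementary corruption: remove the lowest frequencies instead,
    leaving only edge-like high-frequency content."""
    return image - low_pass(image, keep_frac=drop_frac)

# Toy stand-in for a medical image: random pixel intensities.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
blurred = low_pass(img, keep_frac=0.1)   # heavily smoothed version
edges = high_pass(img, drop_frac=0.1)    # high-frequency residue
```

By construction the two corruptions partition the image (`blurred + edges` reconstructs it exactly), so together they cover the full frequency spectrum across which the detection ability was observed to persist.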
The results from our study emphasise that the ability of AI deep learning models to predict self-reported race is itself not the issue of importance. However, our finding that AI can accurately predict self-reported race, even from corrupted, cropped, and noised medical images, often when clinical experts cannot, creates an enormous risk for all model deployments in medical imaging.
National Institute of Biomedical Imaging and Bioengineering, MIDRC grant of National Institutes of Health, US National Science Foundation, National Library of Medicine of the National Institutes of Health, and Taiwan Ministry of Science and Technology.