Tang Yu-Xing, Tang You-Bao, Peng Yifan, Yan Ke, Bagheri Mohammadhadi, Redd Bernadette A, Brandon Catherine J, Lu Zhiyong, Han Mei, Xiao Jing, Summers Ronald M
1Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD 20892 USA.
2National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD 20894 USA.
NPJ Digit Med. 2020 May 14;3:70. doi: 10.1038/s41746-020-0273-z. eCollection 2020.
As one of the most ubiquitous diagnostic imaging tests in medical practice, chest radiography requires timely reporting of potential findings and diagnosis of diseases in the images. Automated, fast, and reliable detection of diseases based on chest radiography is a critical step in the radiology workflow. In this work, we developed and evaluated various deep convolutional neural networks (CNNs) for differentiating between normal and abnormal frontal chest radiographs, to help alert radiologists and clinicians to potential abnormal findings as a means of worklist triaging and reporting prioritization. A CNN-based model achieved an AUC of 0.9824 ± 0.0043 (with an accuracy of 94.64 ± 0.45%, a sensitivity of 96.50 ± 0.36%, and a specificity of 92.86 ± 0.48%) for normal versus abnormal chest radiograph classification. The CNN model obtained an AUC of 0.9804 ± 0.0032 (with an accuracy of 94.71 ± 0.32%, a sensitivity of 92.20 ± 0.34%, and a specificity of 96.34 ± 0.31%) for normal versus lung opacity classification. Classification performance on the external dataset showed that the CNN model is likely to be highly generalizable, with an AUC of 0.9444 ± 0.0029. The CNN model pre-trained on cohorts of adult patients and fine-tuned on pediatric patients achieved an AUC of 0.9851 ± 0.0046 for normal versus pneumonia classification. Pretraining with natural images demonstrated a benefit for a moderate-sized training set of about 8,500 images. The remarkable diagnostic accuracy observed in this study shows that deep CNNs can accurately and effectively differentiate normal and abnormal chest radiographs, thereby providing potential benefits to radiology workflow and patient care.