From the Department of Radiology, Thomas Jefferson University Hospital, Sidney Kimmel Jefferson Medical College, 132 S 10th St, Room 1080A, Main Building, Philadelphia, PA 19107-5244.
Radiology. 2017 Aug;284(2):574-582. doi: 10.1148/radiol.2017162326. Epub 2017 Apr 24.
Purpose To evaluate the efficacy of deep convolutional neural networks (DCNNs) for detecting tuberculosis (TB) on chest radiographs.

Materials and Methods Four deidentified HIPAA-compliant datasets, exempted from review by the institutional review board, were used in this study; together they consisted of 1007 posteroanterior chest radiographs. The datasets were split into training (68.0%), validation (17.1%), and test (14.9%) sets. Two different DCNNs, AlexNet and GoogLeNet, were used to classify the images as having manifestations of pulmonary TB or as healthy. Both untrained networks and networks pretrained on ImageNet were used, as was dataset augmentation with multiple preprocessing techniques. Ensembles of the best-performing algorithms were also evaluated. For cases in which the classifiers disagreed, an independent board-certified cardiothoracic radiologist blindly interpreted the images to evaluate a potential radiologist-augmented workflow. Receiver operating characteristic (ROC) curves and areas under the curve (AUCs) were used to assess model performance, with the DeLong method used for statistical comparison of ROC curves.

Results The best-performing classifier, an ensemble of the AlexNet and GoogLeNet DCNNs, had an AUC of 0.99. The AUCs of the pretrained models were greater than those of the untrained models (P < .001). Augmenting the dataset further increased accuracy (P = .03 for AlexNet and P = .02 for GoogLeNet). The DCNNs disagreed on 13 of the 150 test cases, which were blindly reviewed by a cardiothoracic radiologist, who correctly interpreted all 13 cases (100%). This radiologist-augmented approach resulted in a sensitivity of 97.3% and a specificity of 100%.

Conclusion Deep learning with DCNNs can accurately classify TB at chest radiography, with an AUC of 0.99. A radiologist-augmented approach for cases in which the classifiers disagreed further improved accuracy. © RSNA, 2017.
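The methods compare untrained networks against networks pretrained on ImageNet, combined with dataset augmentation. The sketch below shows, in PyTorch/torchvision terms, what that setup could look like; the specific transforms, input size, and replaced classification heads are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of the pretrained-vs-untrained setup with simple augmentation.
# Hyperparameters and transforms are illustrative assumptions.
import torch.nn as nn
from torchvision import models, transforms


def build_model(arch: str, pretrained: bool) -> nn.Module:
    """Return AlexNet or GoogLeNet with a 2-class head (TB vs. healthy)."""
    if arch == "alexnet":
        model = models.alexnet(weights="DEFAULT" if pretrained else None)
        model.classifier[6] = nn.Linear(4096, 2)  # replace the 1000-class ImageNet head
    elif arch == "googlenet":
        model = models.googlenet(weights="DEFAULT" if pretrained else None)
        model.fc = nn.Linear(1024, 2)
    else:
        raise ValueError(f"unknown architecture: {arch}")
    return model


# Example augmentation pipeline for training (assumed, not the paper's exact recipe):
# radiographs are single-channel, so they are replicated to 3 channels for ImageNet weights.
train_transforms = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((256, 256)),
    transforms.RandomRotation(5),
    transforms.RandomCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```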
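The evaluation combines an ensemble of the two networks' outputs with a disagreement rule that defers to a radiologist, then reports AUC, sensitivity, and specificity. A minimal sketch of that logic is below, assuming each network outputs a TB probability per test case and using a 0.5 decision threshold; the function and variable names are hypothetical.

```python
# Sketch of probability averaging (ensemble), AUC, and a radiologist-deferral
# rule for disagreement cases. The 0.5 threshold is an assumption.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score


def evaluate(p_alexnet, p_googlenet, y_true, radiologist_labels=None):
    p_alexnet = np.asarray(p_alexnet, dtype=float)
    p_googlenet = np.asarray(p_googlenet, dtype=float)
    y_true = np.asarray(y_true, dtype=int)

    # Ensemble: average the two networks' predicted TB probabilities.
    p_ensemble = (p_alexnet + p_googlenet) / 2.0
    auc = roc_auc_score(y_true, p_ensemble)

    # Radiologist-augmented workflow: where the two classifiers disagree,
    # substitute the blinded radiologist's interpretation.
    pred_a = (p_alexnet >= 0.5).astype(int)
    pred_g = (p_googlenet >= 0.5).astype(int)
    final = (p_ensemble >= 0.5).astype(int)
    disagree = pred_a != pred_g
    if radiologist_labels is not None:
        final[disagree] = np.asarray(radiologist_labels, dtype=int)[disagree]

    tn, fp, fn, tp = confusion_matrix(y_true, final).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return auc, sensitivity, specificity
```

The DeLong comparison of correlated ROC curves used in the study is not provided by scikit-learn, so it is omitted from this sketch.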