
Deep Learning Method for Automated Classification of Anteroposterior and Posteroanterior Chest Radiographs.

Affiliations

The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA.

Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA.

Publication Information

J Digit Imaging. 2019 Dec;32(6):925-930. doi: 10.1007/s10278-019-00208-0.

Abstract

Ensuring correct radiograph view labeling is important for machine learning algorithm development and for quality control of studies obtained from multiple facilities. The purpose of this study was to develop and test the performance of a deep convolutional neural network (DCNN) for the automated classification of frontal chest radiographs (CXRs) into anteroposterior (AP) or posteroanterior (PA) views. We obtained 112,120 CXRs from the NIH ChestX-ray14 database, a publicly available CXR database comprising studies performed in adult (106,179 (95%)) and pediatric (5941 (5%)) patients, consisting of 44,810 (40%) AP and 67,310 (60%) PA views. The CXRs were used to train, validate, and test a ResNet-18 DCNN for classification of radiographs into anteroposterior and posteroanterior views. A second DCNN was developed in the same manner using only the pediatric CXRs (2885 (49%) AP and 3056 (51%) PA). Receiver operating characteristic (ROC) curves with area under the curve (AUC) and standard diagnostic measures were used to evaluate each DCNN's performance on the test dataset. The DCNNs trained on the entire CXR dataset and on the pediatric CXR dataset had AUCs of 1.0 and 0.997, respectively, and accuracies of 99.6% and 98%, respectively, for distinguishing between AP and PA CXRs. Sensitivity and specificity were 99.6% and 99.5%, respectively, for the DCNN trained on the entire dataset, and 98% for both sensitivity and specificity for the DCNN trained on the pediatric dataset. The observed difference in performance between the two algorithms was not statistically significant (p = 0.17). Our DCNNs have high accuracy for classifying the AP/PA orientation of frontal CXRs, with only a slight reduction in performance when the training dataset was reduced by 95%. Rapid classification of CXRs by the DCNN can facilitate annotation of large image datasets for machine learning and quality assurance purposes.
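The evaluation metrics reported above (sensitivity, specificity, accuracy, and ROC AUC) can be computed directly from a classifier's outputs. The sketch below, in plain Python, shows one way to do this for a binary AP/PA task; the labels and scores are illustrative stand-ins, not the study's data, and the 0.5 decision threshold is an assumption.

```python
def diagnostic_measures(labels, preds, positive=1):
    """Sensitivity, specificity, and accuracy from binary predictions.

    labels/preds are sequences of 0/1, with `positive` marking the
    class treated as "positive" (here, AP = 1 by convention).
    """
    tp = sum(1 for y, p in zip(labels, preds) if y == positive and p == positive)
    tn = sum(1 for y, p in zip(labels, preds) if y != positive and p != positive)
    fp = sum(1 for y, p in zip(labels, preds) if y != positive and p == positive)
    fn = sum(1 for y, p in zip(labels, preds) if y == positive and p != positive)
    sensitivity = tp / (tp + fn)        # true positive rate
    specificity = tn / (tn + fp)        # true negative rate
    accuracy = (tp + tn) / len(labels)
    return sensitivity, specificity, accuracy

def roc_auc(labels, scores, positive=1):
    """ROC AUC via the rank-sum (Mann-Whitney U) formulation:
    the probability that a random positive case scores higher
    than a random negative case, with ties counted as 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == positive]
    neg = [s for y, s in zip(labels, scores) if y != positive]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 1 = AP, 0 = PA; scores are the model's AP probability.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.1]
preds = [1 if s >= 0.5 else 0 for s in scores]
sens, spec, acc = diagnostic_measures(labels, preds)
print(sens, spec, acc, roc_auc(labels, scores))
```

Note that AUC is threshold-free (it depends only on the ranking of scores), which is why the toy example above reaches an AUC of 1.0 even though one positive case falls below the 0.5 threshold and lowers sensitivity.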
