

Lesion-aware convolutional neural network for chest radiograph classification.

Affiliations

School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, China.


Publication information

Clin Radiol. 2021 Feb;76(2):155.e1-155.e14. doi: 10.1016/j.crad.2020.08.027. Epub 2020 Oct 16.

Abstract

AIM

To investigate the performance of a deep-learning approach termed lesion-aware convolutional neural network (LACNN) to identify 14 different thoracic diseases on chest X-rays (CXRs).

MATERIALS AND METHODS

In total, 10,738 CXRs of 3,526 patients were collected retrospectively. Of these, 1,937 CXRs of 598 patients were selected for training and optimising the lesion-detection network (LDN) of LACNN. The remaining 8,801 CXRs from 2,928 patients were used to train and test the classification network of LACNN. The discriminative performance of the deep-learning approach was compared with that obtained by the radiologists. In addition, its generalisation was validated on the independent public dataset, ChestX-ray14. The decision-making process of the model was visualised by occlusion testing, and the effect of the integration of CXRs and non-image data on model performance was also investigated. In a systematic evaluation, F1 score, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) metrics were calculated.
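The four reported metrics can be sketched in a minimal, self-contained form. This is an illustrative sketch, not the authors' code: each of the 14 pathology labels is treated as an independent binary classification task, and `y_true`, `y_pred`, and `scores` are hypothetical inputs standing in for the study's ground-truth labels, thresholded predictions, and continuous model outputs.

```python
# Illustrative per-label metrics for a multi-label CXR classifier.
# y_true: ground-truth binary labels for one pathology;
# y_pred: thresholded predictions; scores: continuous model outputs.

def confusion_counts(y_true, y_pred):
    """Return (tp, fp, tn, fn) for one pathology label."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, fp, tn, fn

def sensitivity_specificity_f1(y_true, y_pred):
    tp, fp, tn, fn = confusion_counts(y_true, y_pred)
    sens = tp / (tp + fn)             # recall on diseased cases
    spec = tn / (tn + fp)             # recall on normal cases
    f1 = 2 * tp / (2 * tp + fp + fn)  # harmonic mean of precision and recall
    return sens, spec, f1

def auc(y_true, scores):
    """AUC via the Mann-Whitney rank formulation: the probability that a
    randomly chosen positive case scores higher than a negative one."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, with toy labels `[1, 1, 0, 0]`, scores `[0.9, 0.4, 0.6, 0.2]`, and a 0.5 threshold, sensitivity, specificity, and F1 are all 0.5 while the AUC is 0.75, since the AUC uses the continuous scores rather than the thresholded predictions.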

RESULTS

The model generated statistically significantly higher AUC performance compared with radiologists on atelectasis, mass, and nodule, with AUC values of 0.831 (95% confidence interval [CI]: 0.807-0.855), 0.959 (95% CI: 0.944-0.974), and 0.928 (95% CI: 0.906-0.950), respectively. For the other 11 pathologies, there were no statistically significant differences. The average time to complete each CXR classification in the testing dataset was substantially longer for the radiologists (∼35 seconds) than for the LACNN (∼0.197 seconds). In the ChestX-ray14 dataset, the present model also showed competitive performance in comparison with other state-of-the-art deep-learning approaches. Model performance was slightly improved when introducing non-image data.
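The abstract does not state how the 95% confidence intervals on AUC were derived; one common approach, shown here purely as an assumption-laden sketch (not the authors' method), is a nonparametric percentile bootstrap that resamples cases with replacement:

```python
import random

def auc(y_true, scores):
    """Rank-based AUC: probability a positive case outscores a negative one."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc_ci(y_true, scores, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for AUC over resampled test cases.

    Resamples that contain only one class are discarded, since AUC is
    undefined without both positives and negatives.
    """
    rng = random.Random(seed)
    n = len(y_true)
    aucs = []
    while len(aucs) < n_boot:
        idx = [rng.randrange(n) for _ in range(n)]
        yt = [y_true[i] for i in idx]
        if 0 < sum(yt) < n:  # keep only resamples with both classes
            aucs.append(auc(yt, [scores[i] for i in idx]))
    aucs.sort()
    lo = aucs[int((alpha / 2) * n_boot)]
    hi = aucs[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

Comparing two AUCs for statistical significance, as done here against the radiologists, is often handled with the DeLong test or with paired bootstrap resampling; the abstract does not specify which was used.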

CONCLUSION

The proposed LACNN achieved radiologist-level performance in identifying thoracic diseases on CXRs, and could potentially expand patient access to CXR diagnostics.

