
A generic deep learning framework to classify thyroid and breast lesions in ultrasound images.

Affiliations

Department of Ultrasound, Pudong New Area People's Hospital affiliated to Shanghai University of Medicine and Health Sciences, Shanghai, China.

School of Computing, University of Buckingham, Buckingham, UK.

Publication Information

Ultrasonics. 2021 Feb;110:106300. doi: 10.1016/j.ultras.2020.106300. Epub 2020 Nov 12.

Abstract

Breast and thyroid cancers are two of the most common cancers affecting women worldwide. Ultrasonography (US) is a widely used non-invasive imaging modality for detecting breast and thyroid cancers, but its clinical diagnostic accuracy for these cancers remains controversial. Thyroid and breast cancers share some similar high-frequency ultrasound characteristics, such as a taller-than-wide shape, hypo-echogenicity, and ill-defined margins. This study aims to develop an automatic scheme for classifying thyroid and breast lesions in ultrasound images using deep convolutional neural networks (DCNNs). In particular, we propose a generic DCNN architecture with transfer learning and identical architectural parameter settings to train separate models for thyroid and breast cancer (TNet and BNet), and test the viability of such a generic approach on ultrasound images collected from clinical practice. In addition, we investigate the thyroid model's potential to learn features common to both cancers and its performance in classifying both breast and thyroid lesions. A retrospective dataset of 719 thyroid and 672 breast images, captured on US machines of different makes between October 2016 and December 2018, is used in this study. Test results show that both TNet and BNet, built on the same DCNN architecture, achieved good classification results (average accuracy of 86.5% for TNet and 89% for BNet). Furthermore, we used TNet to classify breast lesions; the model achieved a sensitivity of 86.6% and a specificity of 87.1%, indicating its capability to learn features shared by thyroid and breast lesions. We further tested the diagnostic performance of the TNet model against that of three radiologists. The area under the curve (AUC) for thyroid nodule classification is 0.861 (95% CI: 0.792-0.929) for the TNet model and 0.757-0.854 (95% CI: 0.658-0.934) for the three radiologists.
The AUC for breast cancer classification is 0.875 (95% CI: 0.804-0.947) for the TNet model and 0.698-0.777 (95% CI: 0.593-0.872) for the radiologists, indicating the model's potential to classify both breast and thyroid cancers at a higher level of accuracy than radiologists.
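The abstract reports its results as sensitivity, specificity, and AUC. As a reader's aid, the sketch below shows how these metrics are computed from binary labels (0 = benign, 1 = malignant) and model scores; the toy labels and scores are hypothetical, not data from the paper, and this is a minimal illustration, not the authors' evaluation code.

```python
def sensitivity_specificity(labels, preds):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP).
    labels and preds are 0 (benign) or 1 (malignant)."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def auc(labels, scores):
    """Area under the ROC curve via the rank (Mann-Whitney U) formulation:
    the probability that a randomly chosen malignant case scores higher
    than a randomly chosen benign case, counting ties as 1/2."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical example: 6 lesions, model scores in [0, 1],
# thresholded at 0.5 to obtain hard predictions.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.3, 0.4, 0.2, 0.1]
preds = [1 if s >= 0.5 else 0 for s in scores]

sens, spec = sensitivity_specificity(labels, preds)
print(sens, spec, auc(labels, scores))
```

Note that sensitivity and specificity depend on the chosen score threshold, whereas AUC summarizes performance across all thresholds, which is why the paper can compare the TNet model against radiologists on AUC alone.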

