

Thyroid nodule recognition using a joint convolutional neural network with information fusion of ultrasound images and radiofrequency data.

Affiliations

National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University, 1066 Xueyuan Road, Nanshan District, Shenzhen, 518055, Guangdong, People's Republic of China.

State Key Laboratory of Oncology in South China, Collaborative Innovation Center of Cancer Medicine, Department of Ultrasound, Sun Yat-sen University Cancer Center, 651 Dongfeng East Road, Guangzhou, 510060, Guangdong, People's Republic of China.

Publication Information

Eur Radiol. 2021 Jul;31(7):5001-5011. doi: 10.1007/s00330-020-07585-z. Epub 2021 Jan 6.

Abstract

OBJECTIVE

To develop a deep learning-based method that fuses information from ultrasound (US) images and radiofrequency (RF) signals for better classification of thyroid nodules (TNs).

METHODS

One hundred sixty-three pairs of US images and RF signals of TNs from a cohort of adult patients were used for analysis. We developed an information fusion-based joint convolutional neural network (IF-JCNN) for the differential diagnosis of malignant and benign TNs. The IF-JCNN contains two branched CNNs for deep feature extraction: one for US images and the other for RF signals. The extracted features are fused at the backend of the IF-JCNN for TN classification.
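The abstract does not specify the branch architectures or feature dimensions, but the backend-fusion idea can be sketched as follows. Everything here is an illustrative assumption, not the authors' actual IF-JCNN: the branch extractors are stand-ins (random feature vectors in place of trained CNNs), and the feature sizes and linear softmax head are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes -- the abstract does not state the real feature dimensions
IMG_FEAT, RF_FEAT, N_CLASSES = 128, 64, 2

def image_branch(us_image):
    # stand-in for the US-image CNN branch: emits a 128-d deep feature vector
    return rng.standard_normal(IMG_FEAT)

def rf_branch(rf_signal):
    # stand-in for the RF-signal CNN branch: emits a 64-d deep feature vector
    return rng.standard_normal(RF_FEAT)

def fused_predict(us_image, rf_signal, W, b):
    # backend fusion: concatenate both branches' deep features,
    # then apply a linear layer + softmax to get class probabilities
    f = np.concatenate([image_branch(us_image), rf_branch(rf_signal)])
    logits = W @ f + b
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()                 # probabilities over [benign, malignant]

W = 0.01 * rng.standard_normal((N_CLASSES, IMG_FEAT + RF_FEAT))
b = np.zeros(N_CLASSES)
probs = fused_predict(None, None, W, b)
```

The key design point the abstract describes is late (feature-level) fusion: each modality keeps its own extractor, and only the learned features are concatenated before the classification head.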

RESULTS

Across 5-fold cross-validation, the accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) obtained with the IF-JCNN using both US images and RF signals as inputs for TN classification were 0.896 (95% CI 0.838-0.938), 0.885 (95% CI 0.804-0.941), 0.910 (95% CI 0.815-0.966), and 0.956 (95% CI 0.926-0.987), respectively. These were better than the values obtained using US images alone: 0.822 (0.755-0.878, p = 0.0044), 0.792 (0.679-0.868, p = 0.0091), 0.866 (0.760-0.937, p = 0.197), and 0.901 (0.855-0.948, p = 0.0398); or RF signals alone: 0.767 (0.694-0.829, p < 0.001), 0.781 (0.685-0.859, p = 0.0037), 0.746 (0.625-0.845, p < 0.001), and 0.845 (0.786-0.903, p < 0.001).
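The abstract's headline metric is AUROC with a 95% CI. A minimal sketch of how such numbers are computed, using the Mann-Whitney pairwise formulation of AUROC and a percentile bootstrap for the CI; the labels and scores below are synthetic stand-ins (163 invented samples), not the study's data:

```python
import numpy as np

def auroc(y, s):
    # Mann-Whitney formulation: fraction of (positive, negative) pairs
    # where the positive sample scores higher; ties count as 0.5
    pos, neg = s[y == 1], s[y == 0]
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (len(pos) * len(neg))

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 163)             # invented benign/malignant labels
s = y + 0.8 * rng.standard_normal(163)  # synthetic scores correlated with labels

point = auroc(y, s)
# percentile bootstrap: resample cases with replacement, recompute AUROC
boots = [auroc(y[i], s[i])
         for i in (rng.integers(0, 163, 163) for _ in range(1000))]
ci_lo, ci_hi = np.percentile(boots, [2.5, 97.5])
```

Bootstrapping over cases (rather than a closed-form variance) is a common way to report the CI when, as here, the metric pools predictions across cross-validation folds.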

CONCLUSIONS

The proposed IF-JCNN model addresses the limitation of characterizing TNs with CNNs that use US images alone, and it may serve as a promising tool for assisting the diagnosis of thyroid cancer.

KEY POINTS

• Raw radiofrequency signals acquired before ultrasound image formation provide useful information about thyroid nodules that is not carried by the ultrasound images.

• The information carried by raw radiofrequency signals and ultrasound images of thyroid nodules is complementary.

• The performance of a deep convolutional neural network for diagnosing thyroid nodules can be significantly improved by fusing US images and RF signals in the model, as compared with using US images alone.

