Deep Learning for Classification of Solid Renal Parenchymal Tumors Using Contrast-Enhanced Ultrasound.

Author Information

Bai Yun, An Zi-Chen, Du Lian-Fang, Li Fan, Cai Ying-Yu

Affiliations

Department of Ultrasound, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China.

Department of Ultrasound, Shanghai Chest Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China.

Publication Information

J Imaging Inform Med. 2025 May 6. doi: 10.1007/s10278-025-01525-3.

Abstract

The purpose of this study is to assess the ability of deep learning models to classify different subtypes of solid renal parenchymal tumors using contrast-enhanced ultrasound (CEUS) images and to compare their classification performance. A retrospective study was conducted using CEUS images of 237 kidney tumors, including 46 angiomyolipomas (AML), 118 clear cell renal cell carcinomas (ccRCC), 48 papillary RCCs (pRCC), and 25 chromophobe RCCs (chRCC), collected from January 2017 to December 2019. Two deep learning models, based on the ResNet-18 and RepVGG architectures, were trained and validated to distinguish between these subtypes. The models' performance was assessed using sensitivity, specificity, positive predictive value, negative predictive value, F1 score, Matthews correlation coefficient, accuracy, area under the receiver operating characteristic curve (AUC), and confusion matrix analysis. Class activation mapping (CAM) was applied to visualize the specific regions that contributed to the models' predictions. The ResNet-18 and RepVGG-A0 models achieved overall accuracies of 76.7% and 84.5%, respectively, across all four subtypes. The AUCs for AML, ccRCC, pRCC, and chRCC were 0.832, 0.829, 0.806, and 0.795 for the ResNet-18 model, compared to 0.906, 0.911, 0.840, and 0.827 for the RepVGG-A0 model, respectively. The deep learning models could reliably differentiate between various histological subtypes of renal tumors using CEUS images in an objective and non-invasive manner.
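For readers who want a concrete starting point, the following is a minimal PyTorch sketch (not the authors' code) of the kind of pipeline the abstract describes: an ImageNet-pretrained ResNet-18 fine-tuned as a four-class CEUS classifier, followed by per-class one-vs-rest AUC on a held-out split. The folder layout, preprocessing, and hyperparameters are illustrative assumptions; the study's actual training details, the RepVGG-A0 variant, and the CAM visualization are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms
from sklearn.metrics import roc_auc_score

# Generic ImageNet-style preprocessing; the study's actual CEUS preprocessing is not specified in the abstract.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: one sub-directory per subtype (AML, ccRCC, pRCC, chRCC)
# under data/train and data/val.
train_set = datasets.ImageFolder("data/train", transform=preprocess)
val_set = datasets.ImageFolder("data/val", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = DataLoader(val_set, batch_size=32, shuffle=False)
CLASSES = train_set.classes  # class names taken from the sub-directory names

# ImageNet-pretrained ResNet-18 (torchvision >= 0.13 weights API) with a 4-way classifier head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # illustrative hyperparameters

for epoch in range(20):  # illustrative number of epochs
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# Per-subtype one-vs-rest AUC on the validation split, mirroring the per-class AUCs reported in the abstract.
model.eval()
probs, targets = [], []
with torch.no_grad():
    for images, labels in val_loader:
        logits = model(images.to(device))
        probs.append(F.softmax(logits, dim=1).cpu())
        targets.append(labels)
probs = torch.cat(probs).numpy()
targets = torch.cat(targets).numpy()
for i, name in enumerate(CLASSES):
    print(name, roc_auc_score((targets == i).astype(int), probs[:, i]))
```

Swapping in a RepVGG-A0 backbone would only change the model construction (e.g. via the original RepVGG repository or a third-party implementation such as timm, if one is available in your environment); the classification head and the evaluation loop stay the same.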
