Yang Yin, Chen Fei, Liang Hongmei, Bai Yun, Wang Zhen, Zhao Lei, Ma Sai, Niu Qinghua, Li Fan, Xie Tianwu, Cai Yingyu
Department of Ultrasound, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China.
Department of Pediatrics, Jiahui International Hospital, Shanghai, China.
Front Oncol. 2023 Jun 2;13:1166988. doi: 10.3389/fonc.2023.1166988. eCollection 2023.
To investigate the feasibility and efficiency of automatic segmentation of contrast-enhanced ultrasound (CEUS) images of renal tumors by convolutional neural network (CNN)-based models, and the further application of the resulting segmentations in radiomic analysis.
From 94 pathologically confirmed renal tumor cases, 3355 CEUS images were extracted and randomly divided into a training set (3020 images) and a test set (335 images). According to histological subtype, the test set was further split into a clear cell renal cell carcinoma (ccRCC) set (225 images), a renal angiomyolipoma (AML) set (77 images), and a set of other subtypes (33 images). Manual segmentation was the gold standard and served as the ground truth. Seven CNN-based models, including DeepLabV3+, UNet, UNet++, UNet3+, SegNet, MultiResUNet, and Attention UNet, were used for automatic segmentation. Python 3.7.0 and the Pyradiomics package (version 3.0.1) were used for radiomic feature extraction. The performance of all approaches was evaluated by mean intersection over union (mIOU), Dice similarity coefficient (DSC), precision, and recall. The reliability and reproducibility of the radiomic features were evaluated by the Pearson correlation coefficient and the intraclass correlation coefficient (ICC).
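The four overlap metrics named above are standard pixel-wise measures between a predicted mask and the manual ground truth. A minimal sketch, assuming each segmentation is available as a binary NumPy mask of the same shape (the function name and toy masks are illustrative, not from the paper):

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Per-image overlap metrics between a predicted binary mask (pred)
    and the manually segmented ground-truth mask (gt)."""
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)
    tp = np.logical_and(pred, gt).sum()    # true positive pixels
    fp = np.logical_and(pred, ~gt).sum()   # false positive pixels
    fn = np.logical_and(~pred, gt).sum()   # false negative pixels
    iou = tp / (tp + fp + fn)              # intersection over union
    dsc = 2 * tp / (2 * tp + fp + fn)      # Dice similarity coefficient
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return iou, dsc, precision, recall

# Toy 2x3 masks: 2 overlapping pixels, 1 false positive, 1 false negative.
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt = np.array([[1, 0, 0], [0, 1, 1]])
iou, dsc, precision, recall = segmentation_metrics(pred, gt)
# -> iou = 0.5, dsc ~= 0.667, precision ~= 0.667, recall ~= 0.667
```

The reported mIOU corresponds to averaging the per-image IOU over the test set.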
All seven CNN-based models achieved good performance, with mIOU, DSC, precision, and recall ranging from 81.97% to 93.04%, 78.67% to 92.70%, 93.92% to 97.56%, and 85.29% to 95.17%, respectively. The average Pearson coefficients ranged from 0.81 to 0.95, and the average ICCs from 0.77 to 0.92. The UNet++ model performed best, with mIOU, DSC, precision, and recall of 93.04%, 92.70%, 97.43%, and 95.17%, respectively. For the ccRCC, AML, and other-subtype sets, radiomic analysis derived from automatically segmented CEUS images showed excellent reliability and reproducibility, with average Pearson coefficients of 0.95, 0.96, and 0.96, and average ICCs of 0.91, 0.93, and 0.94, respectively.
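The Pearson and ICC agreement statistics compare each radiomic feature extracted from the manual and automatic segmentations of the same images. A minimal sketch follows; the abstract does not state which ICC form was used, so the two-way, absolute-agreement, single-measure ICC(2,1) of Shrout and Fleiss computed here is an assumption for illustration:

```python
import numpy as np

def pearson_and_icc(x, y):
    """Agreement between one radiomic feature extracted from manual (x)
    and automatic (y) segmentations of the same images.
    ICC form assumed here: ICC(2,1), two-way, absolute agreement."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    r = np.corrcoef(x, y)[0, 1]          # Pearson correlation coefficient

    # Two-way ANOVA decomposition over an n-images x 2-"raters" table.
    data = np.stack([x, y], axis=1)
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_total = ((data - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    icc = (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )
    return r, icc

# Perfect agreement between the two segmentations gives r = icc = 1.
r, icc = pearson_and_icc([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0])
```

In practice this would be computed per feature and averaged, matching the "average Pearson coefficients" and "average ICCs" reported above.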
This retrospective single-center study showed that CNN-based models, and the UNet++ model in particular, performed well on automatic segmentation of CEUS images of renal tumors. Extracting radiomic features from automatically segmented CEUS images is feasible and reliable, although further validation in multi-center studies is necessary.