Deng Heng, Huang Wenjun, Zhou Xiuxiu, Zhou Taohu, Fan Li, Liu Shiyuan
School of Medicine, Shanghai University, Shanghai, China.
Department of Radiology, The Second People's Hospital of Deyang, Deyang, Sichuan, China.
Front Oncol. 2024 Oct 9;14:1447132. doi: 10.3389/fonc.2024.1447132. eCollection 2024.
The purpose of this study was to develop and validate a new feature fusion algorithm to improve deep learning-based classification of benign and malignant ground-glass nodules (GGNs).
We retrospectively collected 385 GGNs confirmed by surgical pathology from three hospitals. The 239 GGNs from Hospital 1 were used as the training and internal validation set, and the 115 and 31 GGNs from Hospital 2 and Hospital 3 were used as external test sets 1 and 2, respectively. Among these GGNs, 172 were benign and 203 were malignant. First, we assessed the clinical and morphological features of the GGNs on baseline chest CT and simultaneously extracted whole-lung radiomics features. Then, deep convolutional neural networks (CNNs) and backpropagation neural networks (BPNNs) were applied to extract deep features from the whole-lung CT images, the clinical features, the morphological features, and the whole-lung radiomics features, respectively. Finally, these four types of deep features were integrated using an attention mechanism. Multiple metrics were employed to evaluate the predictive performance of the model. A minimal illustrative sketch of this kind of architecture is given after this paragraph.
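The sketch below is not the authors' implementation; it is a minimal, assumption-laden illustration (PyTorch, a small 3D CNN for whole-lung CT volumes, MLPs standing in for the BPNN branches, and a simple learned soft-attention weighting over the four branch embeddings). All layer sizes, input dimensions, and the attention design are hypothetical.

```python
# Illustrative sketch only: attention-based fusion of four deep-feature branches
# (whole-lung CT images, clinical, morphological, and radiomics features).
import torch
import torch.nn as nn


class MLPBranch(nn.Module):
    """BPNN-style branch: maps a tabular feature vector to a shared embedding size."""
    def __init__(self, in_dim: int, emb_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, emb_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)


class CNNBranch(nn.Module):
    """Small 3D CNN branch for whole-lung CT volumes (shape: B x 1 x D x H x W)."""
    def __init__(self, emb_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.proj = nn.Linear(32, emb_dim)

    def forward(self, x):
        return self.proj(self.features(x).flatten(1))


class AttentionFusionClassifier(nn.Module):
    """Fuses the four branch embeddings with learned attention weights."""
    def __init__(self, clin_dim: int, morph_dim: int, rad_dim: int, emb_dim: int = 128):
        super().__init__()
        self.ct = CNNBranch(emb_dim)
        self.clin = MLPBranch(clin_dim, emb_dim)
        self.morph = MLPBranch(morph_dim, emb_dim)
        self.rad = MLPBranch(rad_dim, emb_dim)
        self.attn = nn.Linear(emb_dim, 1)   # scores each branch embedding
        self.head = nn.Linear(emb_dim, 2)   # benign vs. malignant logits

    def forward(self, ct_vol, clin_x, morph_x, rad_x):
        # Stack the four embeddings: B x 4 x emb_dim
        feats = torch.stack(
            [self.ct(ct_vol), self.clin(clin_x), self.morph(morph_x), self.rad(rad_x)],
            dim=1,
        )
        weights = torch.softmax(self.attn(feats), dim=1)   # B x 4 x 1
        fused = (weights * feats).sum(dim=1)               # attention-weighted sum
        return self.head(fused)


# Usage with random tensors (batch of 2); real inputs would be preprocessed CT volumes
# and standardized clinical/morphological/radiomics vectors of study-specific sizes.
model = AttentionFusionClassifier(clin_dim=5, morph_dim=8, rad_dim=100)
logits = model(torch.randn(2, 1, 32, 64, 64), torch.randn(2, 5),
               torch.randn(2, 8), torch.randn(2, 100))
```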
The deep learning model that integrated clinical, morphological, radiomics, and whole-lung CT image features with an attention mechanism (CMRI-AM) achieved the best performance, with area under the curve (AUC) values of 0.941 (95% CI: 0.898-0.972), 0.861 (95% CI: 0.823-0.882), and 0.906 (95% CI: 0.878-0.932) on the internal validation set, external test set 1, and external test set 2, respectively. The AUC differences between the CMRI-AM model and the other feature-combination models were statistically significant on all three datasets (all p<0.05).
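The abstract does not state how the 95% confidence intervals or the pairwise AUC comparisons were computed. One common approach is percentile bootstrap resampling of the test-set predictions, sketched below as an assumption-labelled example (scikit-learn/NumPy); it is not the authors' reported procedure.

```python
# Hypothetical sketch: percentile bootstrap 95% CI for AUC on a held-out test set.
import numpy as np
from sklearn.metrics import roc_auc_score


def bootstrap_auc_ci(y_true, y_score, n_boot: int = 2000, seed: int = 0):
    """Return the point-estimate AUC and a percentile bootstrap 95% CI."""
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if len(np.unique(y_true[idx])) < 2:   # a resample must contain both classes
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [2.5, 97.5])
    return roc_auc_score(y_true, y_score), (lo, hi)
```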
Our experimental results demonstrated that (1) applying an attention mechanism to fuse whole-lung CT images, radiomics features, and clinical and morphological features is feasible; (2) clinical, morphological, and radiomics features provide supplementary information for the CT image-based classification of benign and malignant GGNs; and (3) using baseline whole-lung CT features to predict whether a GGN is benign or malignant is an effective approach. Therefore, optimizing the fusion of baseline whole-lung CT features can effectively improve the classification performance for GGNs.