Du Lingyan, Tang Guozhi, Che Yue, Ling Shihai, Chen Xin, Pan Xingliang
School of Automation and Information Engineering, Sichuan University of Science and Engineering, Zigong, People's Republic of China.
Intelligent Perception and Control Key Laboratory of Sichuan Province, Sichuan University of Science and Engineering, Yibin, People's Republic of China.
Med Phys. 2025 Jul;52(7):e17901. doi: 10.1002/mp.17901. Epub 2025 May 20.
Accurate classification of lung nodules is critical for personalized lung cancer treatment and prognosis prediction. Treatment options and patient prognosis are closely related to nodule type, but nodule types are numerous and the distinctions between some of them are subtle, making accurate classification based on conventional medical imaging and physician experience challenging.
In this study, CT radiomics was used to analyze quantitative features of CT images and characterize pulmonary nodules, and feature fusion was then applied to integrate the radiomics features with deep learning features to improve classification accuracy.
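As context for the radiomics side of the pipeline, the following is a minimal sketch of extracting quantitative features from a CT volume and its nodule mask using the open-source pyradiomics package. The file names and the use of default extractor settings are illustrative assumptions; the paper's exact feature classes and preprocessing are not specified here.

```python
# Sketch: quantitative radiomics feature extraction from a CT nodule region.
import SimpleITK as sitk
from radiomics import featureextractor

# The default extractor computes shape, first-order, and texture features
# (GLCM, GLRLM, GLSZM, ...) over the region defined by the mask.
extractor = featureextractor.RadiomicsFeatureExtractor()

ct_image = sitk.ReadImage("nodule_ct.nrrd")        # hypothetical CT crop
nodule_mask = sitk.ReadImage("nodule_mask.nrrd")   # hypothetical segmentation

features = extractor.execute(ct_image, nodule_mask)

# Keep only numeric feature values; 'diagnostics_*' entries are metadata.
radiomic_vector = {k: v for k, v in features.items()
                   if not k.startswith("diagnostics")}
print(len(radiomic_vector), "radiomics features extracted")
```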
This paper proposes a pulmonary nodule classification method that fuses radiomics features with deep learning neural network features to automatically classify nodules along multiple attributes (Malignancy, Calcification, Spiculation, Lobulation, Margin, and Texture). By introducing the Discriminant Correlation Analysis (DCA) feature fusion algorithm, the method maximizes the complementarity between the two feature types and the differences between classes, ensuring interaction between the two sources of information and making effective use of their complementary characteristics. The LIDC-IDRI dataset is used for training, and the advantages and effectiveness of the fusion feature model are validated on multiple nodule classification tasks.
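To make the fusion step concrete, below is a minimal NumPy sketch of Discriminant Correlation Analysis style fusion, following the general recipe of whitening the between-class scatter of each feature set and then decorrelating the pair of sets via an SVD of their between-set covariance. The array shapes, rank cutoff, and final concatenation are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch: DCA-style fusion of radiomics and deep learning feature vectors.
import numpy as np


def _class_projection(X, y, eps=1e-12):
    """Project X (n_samples x n_features) so its between-class scatter is whitened."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    # Each row of phi is sqrt(n_i) * (class mean - overall mean), one per class.
    phi = np.stack([
        np.sqrt((y == c).sum()) * (X[y == c].mean(axis=0) - mean_all)
        for c in classes
    ])
    # Eigendecompose the small c x c matrix instead of the full p x p scatter.
    evals, Q = np.linalg.eigh(phi @ phi.T)
    keep = evals > eps
    evals, Q = evals[keep], Q[:, keep]
    W = phi.T @ Q / np.sqrt(evals)            # maps between-class scatter to identity
    return X @ W                              # projected features (n x r)


def dca_fuse(X1, X2, y, mode="concat"):
    """Fuse two feature sets (n_samples x n_features_k) into one discriminative vector."""
    P1, P2 = _class_projection(X1, y), _class_projection(X2, y)
    r = min(P1.shape[1], P2.shape[1])
    P1, P2 = P1[:, :r], P2[:, :r]
    # SVD of the between-set covariance maximizes pairwise correlation across sets.
    U, s, Vt = np.linalg.svd(P1.T @ P2)
    Z1 = P1 @ (U / np.sqrt(s))
    Z2 = P2 @ (Vt.T / np.sqrt(s))
    return np.hstack([Z1, Z2]) if mode == "concat" else Z1 + Z2


# Tiny usage example with random stand-ins for radiomics / CNN features.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 3, size=120)             # hypothetical 3-class task
    radiomic_feats = rng.normal(size=(120, 90))       # hypothetical radiomics vectors
    deep_feats = rng.normal(size=(120, 256))          # hypothetical CNN embeddings
    fused = dca_fuse(radiomic_feats, deep_feats, labels)
    print(fused.shape)
```

The fused vectors would then be passed to a downstream classifier; whether the paper concatenates or sums the transformed features is not stated in the abstract, so both variants are shown.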
The experimental results show that the fusion feature model outperforms the single-feature models in all classification tasks. The AUCs for the Calcification, Lobulation, Margin, Spiculation, Texture, and Malignancy classification tasks reached 0.9663, 0.8113, 0.8815, 0.8140, 0.9010, and 0.9316, respectively. In tasks such as calcification and texture classification, the fusion feature model markedly improved recognition of minority classes.
The fusion of radiomics features and deep learning neural network features can effectively enhance the overall performance of pulmonary nodule classification models while also improving the recognition of minority classes when there is a significant class imbalance.