Enhanced Breast Cancer Diagnosis Using Multimodal Feature Fusion with Radiomics and Transfer Learning.

Authors

Maruf Nazmul Ahasan, Basuhail Abdullah, Ramzan Muhammad Umair

Affiliation

Faculty of Computing and Information Technology, Department of Computer Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia.

Publication

Diagnostics (Basel). 2025 Aug 28;15(17):2170. doi: 10.3390/diagnostics15172170.

Abstract

Breast cancer remains a critical public health problem worldwide and is a leading cause of cancer-related mortality. Optimizing clinical outcomes is contingent upon the early and precise detection of malignancies. Advances in medical imaging and artificial intelligence (AI), particularly in the fields of radiomics and deep learning (DL), have contributed to improvements in early detection methodologies. Nonetheless, persistent challenges, including limited data availability, model overfitting, and restricted generalization, continue to hinder performance. This study aims to overcome existing challenges by improving model accuracy and robustness through enhanced data augmentation and the integration of radiomics and deep learning features from the CBIS-DDSM dataset. To mitigate overfitting and improve model generalization, data augmentation techniques were applied. The PyRadiomics library was used to extract radiomics features, while transfer learning models were employed to derive deep learning features from the augmented training dataset. For radiomics feature selection, we compared multiple supervised feature selection methods, including recursive feature elimination (RFE) with random forest and logistic regression estimators, the ANOVA F-test, LASSO, and mutual information. Embedded methods based on GPU-accelerated XGBoost, LightGBM, and CatBoost were also explored. Finally, we integrated radiomics and deep features to build a unified multimodal feature space for improved classification performance. Based on this integrated set of radiomics and deep learning features, 13 pre-trained transfer learning models were trained and evaluated, including various versions of ResNet (50, 50V2, 101, 101V2, 152, 152V2), DenseNet (121, 169, 201), InceptionV3, MobileNet, and VGG (16, 19). Among the evaluated models, ResNet152 achieved the highest classification accuracy of 97%, demonstrating the potential of this approach to enhance diagnostic precision. Other models, including VGG19, ResNet101V2, and ResNet101, achieved 96% accuracy, emphasizing the importance of the selected feature set in achieving robust detection. Future research could build on this work by incorporating Vision Transformer (ViT) architectures and leveraging multimodal data (e.g., clinical data, genomic information, and patient history). This could improve predictive performance and make the model more robust and adaptable to diverse data types. Ultimately, this approach has the potential to transform breast cancer detection, making it more accurate and interpretable.
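
To make the described pipeline concrete, the sketch below outlines one way the fusion of handcrafted radiomics and learned deep features could be assembled in Python. It is a minimal illustration, not the authors' code: the file paths, the 224×224 patch size, the choice of 50 retained radiomics features, and the logistic-regression head are assumptions added for the example. ResNet152 is shown as the backbone because it scored highest among the 13 evaluated models, and RFE with a random forest stands in for the broader set of selection methods compared in the study.

```python
# Minimal sketch of a radiomics + deep-feature fusion pipeline in the spirit of
# the abstract. Paths, patch size, the number of selected features, and the
# downstream classifier are illustrative assumptions, not the paper's settings.
import numpy as np
from radiomics import featureextractor                     # PyRadiomics
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from tensorflow.keras.applications import ResNet152
from tensorflow.keras.applications.resnet import preprocess_input
from tensorflow.keras.preprocessing import image as keras_image

# 1) Handcrafted radiomics features: PyRadiomics takes an image and its ROI mask.
#    Settings are left at defaults here; 2D mammogram ROIs may need force2D enabled.
extractor = featureextractor.RadiomicsFeatureExtractor()

def radiomics_vector(image_path, mask_path):
    """Return the numeric radiomics features for one image/mask pair."""
    result = extractor.execute(image_path, mask_path)
    return np.array([v for k, v in result.items()
                     if not k.startswith("diagnostics")], dtype=float)

# 2) Deep features: a frozen ImageNet backbone with global average pooling
#    (ResNet152 shown here; the study compared 13 such backbones).
backbone = ResNet152(weights="imagenet", include_top=False, pooling="avg")

def deep_vector(image_path):
    """Return the 2048-d ResNet152 embedding for one RGB mammogram patch."""
    img = keras_image.load_img(image_path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(keras_image.img_to_array(img), axis=0))
    return backbone.predict(x, verbose=0)[0]

# 3) Select radiomics features, concatenate the two modalities, fit a classifier.
def fuse_and_classify(radiomics_X, deep_X, y, n_selected=50):
    """RFE-selected radiomics features + deep features -> fused classifier."""
    selector = RFE(RandomForestClassifier(n_estimators=200, random_state=0),
                   n_features_to_select=n_selected)
    radiomics_sel = selector.fit_transform(radiomics_X, y)
    fused = np.concatenate([radiomics_sel, deep_X], axis=1)  # feature-level fusion
    clf = LogisticRegression(max_iter=5000).fit(fused, y)
    return selector, clf
```

Concatenation is the simplest feature-level fusion strategy; the same scaffold accommodates the other selectors compared in the paper (ANOVA F-test, LASSO, mutual information, or gradient-boosted embedded methods) by swapping out the selector object.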

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3e38/12428243/c900bd733640/diagnostics-15-02170-g001.jpg
