
Multi-Scale Vision Transformer with Optimized Feature Fusion for Mammographic Breast Cancer Classification.

Author Information

Ahmed Soaad, Elazab Naira, El-Gayar Mostafa M, Elmogy Mohammed, Fouda Yasser M

Affiliations

Computer Science Division, Mathematics Department, Faculty of Science, Mansoura University, Mansoura 35516, Egypt.

Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt.

Publication Information

Diagnostics (Basel). 2025 May 28;15(11):1361. doi: 10.3390/diagnostics15111361.

Abstract

Background: Breast cancer remains one of the leading causes of mortality among women worldwide, highlighting the critical need for accurate and efficient diagnostic methods. Traditional deep learning models often struggle with feature redundancy, suboptimal feature fusion, and inefficient selection of discriminative features, leading to limitations in classification performance. Methods: To address these challenges, we propose a new deep learning framework that leverages MAX-ViT for multi-scale feature extraction, ensuring robust and hierarchical representation learning. A gated attention fusion module (GAFM) is introduced to dynamically integrate the extracted features, enhancing the discriminative power of the fused representation. Additionally, we employ Harris Hawks optimization (HHO) for feature selection, reducing redundancy and improving classification efficiency. Finally, XGBoost is utilized for classification, taking advantage of its strong generalization capabilities. Results: We evaluate our model on the King Abdulaziz University Mammogram Dataset, categorized based on BI-RADS classifications. Experimental results demonstrate the effectiveness of our approach, achieving 98.2% for accuracy, 98.0% for precision, 98.1% for recall, 98.0% for F1-score, 98.9% for the area under the curve (AUC), and 95% for the Matthews correlation coefficient (MCC), outperforming existing state-of-the-art models. Conclusions: These results validate the robustness of our fusion-based framework in improving breast cancer diagnosis and classification.
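The gated attention fusion idea described in the abstract (dynamically weighting features from different scales before classification) can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the gate here is a single sigmoid layer over two concatenated feature vectors, and the weight matrix `W`, bias `b`, and feature dimensions are hypothetical stand-ins for the multi-scale MAX-ViT features the paper actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

def gated_attention_fusion(f1, f2, W, b):
    """Fuse two feature vectors with a sigmoid gate.

    g = sigmoid(W @ [f1; f2] + b)   -- one gate value per feature dimension
    fused = g * f1 + (1 - g) * f2   -- elementwise convex combination
    """
    z = W @ np.concatenate([f1, f2]) + b
    g = 1.0 / (1.0 + np.exp(-z))
    return g * f1 + (1.0 - g) * f2

d = 8
f1 = rng.standard_normal(d)          # stand-in for a fine-scale feature vector
f2 = rng.standard_normal(d)          # stand-in for a coarse-scale feature vector
W = 0.1 * rng.standard_normal((d, 2 * d))  # hypothetical learned gate weights
b = np.zeros(d)

fused = gated_attention_fusion(f1, f2, W, b)
```

Because the gate is a per-dimension convex combination, each fused value lies between the corresponding values of `f1` and `f2`; in the full pipeline the fused vector would then pass through HHO-based feature selection before reaching the XGBoost classifier.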

