Ashurov Asadulla, Chelloug Samia Allaoua, Tselykh Alexey, Muthanna Mohammed Saleh Ali, Muthanna Ammar, Al-Gaashani Mehdhar S A M
School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China.
Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia.
Life (Basel). 2023 Sep 21;13(9):1945. doi: 10.3390/life13091945.
Breast cancer, a leading cause of female mortality worldwide, poses a significant health challenge. Recent advances in deep learning have transformed breast cancer pathology by enabling accurate image classification. Various imaging methods, such as mammography, CT, MRI, ultrasound, and biopsy, aid in breast cancer detection, and computer-assisted pathological image classification is of paramount importance for diagnosis. This study introduces a novel approach to breast cancer histopathological image classification that leverages modified pre-trained CNN models and attention mechanisms to enhance interpretability and robustness, emphasizing localized features and enabling accurate discrimination of complex cases. Our method applies transfer learning with deep CNN models (Xception, VGG16, ResNet50, MobileNet, and DenseNet121) augmented with the convolutional block attention module (CBAM): the pre-trained models are fine-tuned, and two CBAM modules are appended to the end of each model. The models are compared with state-of-the-art breast cancer diagnosis approaches and evaluated for accuracy, precision, recall, and F1 score, with confusion matrices used to assess and visualize their performance. On the "BreakHis" breast cancer dataset, the test accuracy rates for the attention mechanism (AM) using the Xception model are an encouraging 99.2% and 99.5%, and DenseNet121 with AMs reaches 99.6%. The proposed approaches also outperformed previous approaches examined in the related studies.
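The CBAM module appended to the pre-trained backbones combines a channel-attention stage (global average and max pooling fed through a shared MLP) with a spatial-attention stage (channel-wise average and max descriptors convolved with a single kernel). A minimal NumPy sketch of these two stages, not the authors' implementation: the array shapes, the shared-MLP reduction width, and the 7×7 kernel size are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """Channel attention: x is an (H, W, C) feature map; w1/w2 form a shared MLP."""
    avg = x.mean(axis=(0, 1))  # global average-pooled descriptor, shape (C,)
    mx = x.max(axis=(0, 1))    # global max-pooled descriptor, shape (C,)
    # Shared two-layer MLP (ReLU bottleneck) applied to both descriptors, summed
    att = sigmoid(w2 @ np.maximum(0, w1 @ avg) + w2 @ np.maximum(0, w1 @ mx))
    return x * att             # scale each channel; broadcasts over H and W

def spatial_attention(x, kernel):
    """Spatial attention: convolve channel-wise avg/max maps with one kernel."""
    desc = np.stack([x.mean(axis=2), x.max(axis=2)], axis=2)  # (H, W, 2)
    h, w, _ = desc.shape
    k = kernel.shape[0]        # square kernel, e.g. 7x7x2
    pad = k // 2
    padded = np.pad(desc, ((pad, pad), (pad, pad), (0, 0)))
    att = np.empty((h, w))
    for i in range(h):         # naive "same"-padded 2-D convolution
        for j in range(w):
            att[i, j] = sigmoid(np.sum(padded[i:i + k, j:j + k] * kernel))
    return x * att[:, :, None]  # scale each spatial location across channels

def cbam(x, w1, w2, kernel):
    """CBAM block: channel attention followed by spatial attention."""
    return spatial_attention(channel_attention(x, w1, w2), kernel)
```

In the pipeline described above, a feature map produced by a fine-tuned backbone such as Xception would pass through such a block before the classification head; because both attention maps lie in (0, 1), the module reweights rather than replaces the backbone's features.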