eVida Research Group, University of Deusto, Bilbao, 48007, Spain.
Bioaraba Health Research Institute, Oncology Diagnostics and Therapeutics Area, Department of Pathological Anatomy, University Hospital of Alava, Vitoria, 01009, Spain.
Sci Rep. 2022 Sep 16;12(1):15600. doi: 10.1038/s41598-022-19278-2.
Breast cancer is a common malignancy and a leading cause of cancer-related deaths in women worldwide. Early diagnosis can significantly reduce morbidity and mortality. To this end, histopathological diagnosis is the gold-standard approach; however, the process is tedious, labor-intensive, and subject to inter-reader variability. An automatic diagnostic system can therefore help improve the quality of diagnosis. This paper presents a deep learning approach that automatically classifies hematoxylin-eosin-stained breast cancer microscopy images from our collected dataset into normal tissue, benign lesion, in situ carcinoma, and invasive carcinoma. The proposed model exploits six intermediate layers of the Xception (Extreme Inception) network to extract robust, abstract features from the input images. We first optimized the model on the original (unnormalized) dataset using 5-fold cross-validation, and then evaluated its performance on four datasets produced by Reinhard, Ruifrok, Macenko, and Vahadane stain normalization. On the original images, the proposed framework yielded an accuracy of 98% with a kappa score of 0.969, an average AUC-ROC of 0.998, and a mean AUC-PR of 0.995; for in situ carcinoma and invasive carcinoma specifically, it achieved sensitivities of 96% and 99%, respectively. On the normalized images, the architecture performed best with Macenko normalization, reaching an accuracy of 97.79% with a kappa score of 0.965, an average AUC-ROC of 0.997, and a mean AUC-PR of 0.991; sensitivities for in situ and invasive carcinoma were again 96% and 99%, respectively.
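The kappa scores reported above quantify agreement between predicted and true classes beyond chance. As an illustrative aid only (not the authors' evaluation code), Cohen's kappa can be computed from paired label lists as follows; the label encoding is hypothetical:

```python
def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: chance-corrected agreement between two label lists."""
    labels = sorted(set(y_true) | set(y_pred))
    n = len(y_true)
    # Observed agreement: fraction of exactly matching predictions.
    p_o = sum(t == p for t, p in zip(y_true, y_pred)) / n
    # Expected agreement under independence of the two label distributions.
    p_e = sum((y_true.count(c) / n) * (y_pred.count(c) / n) for c in labels)
    return (p_o - p_e) / (1 - p_e)

# Toy 2-class example: 3 of 4 predictions correct.
print(cohens_kappa([0, 0, 1, 1], [0, 0, 1, 0]))  # 0.5
```

For the paper's four-class setting the labels would simply be the four tissue categories; in practice a library routine such as scikit-learn's `cohen_kappa_score` would typically be used instead.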
These results demonstrate that the proposed model outperforms the baseline AlexNet as well as the state-of-the-art VGG16, VGG19, Inception-v3, and Xception models with their default settings. Furthermore, although the stain normalization techniques offered competitive performance, they could not surpass the results obtained on the original dataset.
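Of the four stain-normalization methods compared, Reinhard's is conceptually the simplest: it matches the per-channel mean and standard deviation of a source image to those of a reference image in the Lab color space. A minimal channel-wise sketch of that mean/std transfer follows; it omits the RGB-to-Lab conversion of the full method, and the values are purely illustrative:

```python
def _mean_std(channel):
    """Return (mean, population standard deviation) of a flat channel."""
    n = len(channel)
    mu = sum(channel) / n
    var = sum((x - mu) ** 2 for x in channel) / n
    return mu, var ** 0.5

def reinhard_channel(source, reference):
    """Shift/scale source values so their mean and std match the reference."""
    mu_s, sd_s = _mean_std(source)
    mu_r, sd_r = _mean_std(reference)
    return [(x - mu_s) * (sd_r / sd_s) + mu_r for x in source]

# Toy channels: the transformed source adopts the reference statistics.
print(reinhard_channel([0.0, 2.0], [10.0, 14.0]))  # [10.0, 14.0]
```

In the full Reinhard pipeline this transfer is applied independently to the L, a, and b channels after converting both images to Lab (e.g. via scikit-image's `rgb2lab`), then converting back to RGB.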