Department of Electronics and Communication Engineering, BMS Institute of Technology Management, Bengaluru 560064, India.
Crit Rev Biomed Eng. 2022;50(2):1-19. doi: 10.1615/CritRevBiomedEng.2022043417.
Many researchers have developed computer-aided diagnosis (CAD) methods to diagnose breast cancer from histopathology microscopy images. These techniques help improve the accuracy of biopsy diagnosis with hematoxylin and eosin (H&E)-stained images. Most CAD systems, however, rely on inefficient and time-consuming manual feature extraction. Using a deep learning (DL) model with convolutional layers, we present a method that extracts the most discriminative visual features for breast cancer classification. H&E-stained breast biopsy images can be categorized into four classes: benign lesion, normal tissue, carcinoma in situ, and invasive carcinoma. Accurate classification of these histopathological images is essential for distinguishing the different types of breast cancer. The MobileNet architecture is used to achieve high accuracy with low resource utilization. The proposed model is fast, inexpensive, and safe, making it suitable for detecting breast cancer at an early stage, and this lightweight deep neural network can be accelerated with field-programmable gate arrays (FPGAs). The model is trained with a categorical cross-entropy loss, which teaches it to assign a high probability to the correct class and low probabilities to the others; this loss is applied in the classification stage of the convolutional neural network (CNN), after the clustering stage, improving the performance of the proposed system. To measure training and validation accuracy, the model was trained on Google Colab for 280 epochs on a GPU with 2496 CUDA cores, 12 GB of GDDR5 VRAM, and 12.6 GB of RAM. Our results demonstrate that a deep CNN combined with a chi-square test improves the accuracy of histopathological breast cancer image classification by more than 11% over other state-of-the-art methods.
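The abstract pairs the deep CNN with a chi-square test. One common way such a test is used is chi-square feature scoring, where features whose per-class totals deviate most from expectation rank highest; a minimal NumPy sketch under that assumption (the function name and toy data are illustrative, and the paper's exact usage may differ):

```python
import numpy as np

def chi2_scores(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Chi-square score of each nonnegative feature against class labels.

    Contingency-based statistic: sum over classes of
    (observed - expected)^2 / expected, computed per feature.
    X: nonnegative feature matrix, shape (n_samples, n_features)
    y: integer class labels, shape (n_samples,)
    """
    classes = np.unique(y)
    # Observed: total feature mass accumulated within each class.
    observed = np.array([X[y == c].sum(axis=0) for c in classes])
    # Expected: class frequency times the overall feature total.
    class_prob = np.array([(y == c).mean() for c in classes])[:, None]
    expected = class_prob * X.sum(axis=0)[None, :]
    return ((observed - expected) ** 2 / expected).sum(axis=0)

# Toy example: feature 0 differs across classes, feature 1 does not,
# so feature 0 receives the higher score.
X = np.array([[1.0, 3.0], [1.0, 3.0], [5.0, 3.0], [5.0, 3.0]])
y = np.array([0, 0, 1, 1])
```

Scoring features this way and keeping only the top-ranked ones is a plausible route to the reported accuracy gain, since it discards inputs carrying no class information.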