Yang Yunfeng, Guan Chen
Department of Mathematics and Statistics, Northeast Petroleum University, Daqing, China.
J Xray Sci Technol. 2022;30(1):33-44. doi: 10.3233/XST-210982.
Accurate automatic classification of medical pathological images has long been an important problem in the field of deep learning. However, traditional manual feature extraction and image classification usually require in-depth domain knowledge and experienced researchers to extract and compute high-quality image features. Such work generally takes a lot of time, and the classification results are often not ideal. To address these problems, this study proposes and tests an improved network model, DenseNet-201-MSD, for the classification of medical pathological images of breast cancer. First, the images are preprocessed and the traditional pooling layers are replaced by multiple scaling decomposition to prevent overfitting caused by the high dimensionality of the image dataset. Second, the batch normalization (BN) algorithm is added before the Softmax activation function and the Adam optimizer is used to improve the performance and image recognition accuracy of the network model. When verified on the BreakHis dataset, the new deep learning model yields image classification accuracies of 99.4%, 98.8%, 98.2% and 99.4% for pathological images at four different magnifications, respectively. The study results demonstrate that this new classification method and deep learning model can effectively improve the accuracy of pathological image classification, indicating its potential value in future clinical application.
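A minimal PyTorch sketch of the training setup the abstract describes, under stated assumptions: the class name DenseNet201BNHead, the binary benign/malignant output, the BN placement just before the classifier head, and the learning rate are illustrative choices not given in the abstract, and the multiple scaling decomposition (MSD) module that replaces pooling is not specified here and is therefore omitted. Only the DenseNet-201 backbone, the added BN stage before Softmax, and the Adam optimizer follow the text; this is not the authors' implementation.

import torch
import torch.nn as nn
from torchvision import models  # torchvision >= 0.13 assumed for the weights argument

class DenseNet201BNHead(nn.Module):
    """DenseNet-201 backbone with a BN layer inserted before the classifier,
    loosely following the abstract. The MSD pooling replacement is omitted
    because the abstract does not specify it."""

    def __init__(self, num_classes: int = 2, pretrained: bool = True):
        super().__init__()
        backbone = models.densenet201(weights="DEFAULT" if pretrained else None)
        num_features = backbone.classifier.in_features   # 1920 for DenseNet-201
        backbone.classifier = nn.Identity()               # strip the original head
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.BatchNorm1d(num_features),                  # BN added before the Softmax stage
            nn.Linear(num_features, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = self.backbone(x)    # pooled feature vector per image
        return self.head(features)     # raw logits; Softmax is applied inside the loss

model = DenseNet201BNHead(num_classes=2)                       # benign vs. malignant (assumed)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)      # Adam, as stated; lr is illustrative
criterion = nn.CrossEntropyLoss()                              # applies Softmax internally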