School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China.
School of Information Science and Engineering, Xinjiang University, Ürümqi 830046, China.
Sensors (Basel). 2020 Dec 27;21(1):122. doi: 10.3390/s21010122.
Cervical cancer is the fourth most common cancer in the world. Whole-slide images (WSIs) are an important standard for the diagnosis of cervical cancer. Missed diagnoses and misdiagnoses often occur because of the high similarity among pathological cervical images, the large number of slides to read, the long reading time, and pathologists' insufficient experience. Existing models have insufficient feature-extraction and representation capabilities and perform poorly at pathological classification. Therefore, this work first designs an image processing algorithm for data augmentation. Second, deep convolutional features are extracted by fine-tuning pre-trained deep network models, including ResNet50 v2, DenseNet121, Inception v3, VGGNet19, and Inception-ResNet, and traditional image features are extracted with local binary patterns (LBP) and histograms of oriented gradients (HOG). Third, the features extracted by the fine-tuned models are serially fused according to the feature-representation-ability parameter proposed in this paper and the accuracy over multiple experiments, and spectral embedding is used for dimension reduction. Finally, the fused features are input into the Analysis of Variance F-value Spectral Embedding Net (AF-SENet) for classification. The dataset contains four classes of pathological images: normal, low-grade squamous intraepithelial lesion (LSIL), high-grade squamous intraepithelial lesion (HSIL), and cancer. It is divided into a training set (90%) and a test set (10%). The serial fusion of the deep features extracted by ResNet50 v2 and DenseNet121 performs best, with average classification accuracy reaching 95.33%, which is 1.07% higher than ResNet50 v2 alone and 1.05% higher than DenseNet121 alone. Recognition is significantly improved, especially for LSIL, reaching 90.89%, which is 2.88% higher than ResNet50 v2 and 2.1% higher than DenseNet121.
Thus, this method significantly improves the accuracy and generalization ability of pathological cervical WSI recognition by fusing deep features.
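The pipeline the abstract describes (serial fusion of two backbones' deep features, spectral embedding for dimension reduction, ANOVA F-value feature selection, and a classifier head) can be sketched with scikit-learn. This is a minimal illustration, not the authors' implementation: the random arrays stand in for ResNet50 v2 and DenseNet121 feature vectors, and the feature dimensions, `k`, embedding size, and MLP head are all assumed for the example.

```python
# Hedged sketch of an AF-SENet-style pipeline: serial (concatenation) fusion
# of deep features from two backbones, ANOVA F-value feature selection,
# spectral embedding for dimension reduction, and a simple classifier head.
# Synthetic features stand in for the ResNet50 v2 / DenseNet121 outputs.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.manifold import SpectralEmbedding
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 200
# Four classes: normal, LSIL, HSIL, cancer.
y = rng.integers(0, 4, size=n)
# Stand-ins for deep features from the two fine-tuned backbones
# (dimensions are hypothetical, chosen for the sketch).
feats_resnet = rng.normal(size=(n, 64)) + 0.5 * y[:, None]
feats_densenet = rng.normal(size=(n, 48)) + 0.5 * y[:, None]

# Serial fusion: concatenate the two feature vectors per image.
fused = np.hstack([feats_resnet, feats_densenet])        # shape (200, 112)

# ANOVA F-values rank dimensions by class discriminability; keep the top 32.
selected = SelectKBest(f_classif, k=32).fit_transform(fused, y)

# Spectral embedding for nonlinear dimension reduction of the fused features.
embedded = SpectralEmbedding(n_components=8, random_state=0).fit_transform(selected)

# 90/10 split and a small MLP head, mirroring the abstract's protocol.
Xtr, Xte, ytr, yte = train_test_split(
    embedded, y, test_size=0.1, random_state=0, stratify=y
)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(Xtr, ytr)
print(f"test accuracy: {clf.score(Xte, yte):.2f}")
```

Note that `SpectralEmbedding` is transductive (it has no `transform` method for unseen data), so the sketch embeds all samples before splitting; a production system would need an out-of-sample extension or a parametric reducer.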