USIC&T, Guru Gobind Singh Indraprastha University, New Delhi 110078, India.
Department of ECE, Indore Institute of Science & Technology, Indore 453331, India.
Biomed Res Int. 2023 Apr 17;2023:4214817. doi: 10.1155/2023/4214817. eCollection 2023.
Cervical cancer is a serious threat to women's health owing to its malignancy and high fatality rate. The disease can be cured completely if the affected tissues are located and treated at an early stage. The traditional practice for screening cervical cancer is the examination of cervix tissues using the Papanicolaou (Pap) test. Manual inspection of Pap smears is prone to false-negative outcomes caused by human error, even when infected samples are present. Automated computer-vision diagnosis overcomes this limitation and plays a substantial role in screening abnormal tissues affected by cervical cancer. In this paper, we propose a hybrid deep feature concatenated network (HDFCN), preceded by two-step data augmentation, to detect cervical cancer through binary and multiclass classification of Pap smear images. The network classifies malignant samples in whole-slide images (WSI) from the publicly available SIPaKMeD database by concatenating features extracted from fine-tuned deep learning (DL) models, namely VGG-16, ResNet-152, and DenseNet-169, pretrained on the ImageNet dataset. The performance of the proposed model is compared with the individual performances of the aforementioned DL networks using transfer learning (TL). Our proposed model achieved an accuracy of 97.45% and 99.29% for 5-class and 2-class classifications, respectively. Additionally, an experiment was performed to classify liquid-based cytology (LBC) WSI data containing Pap smear images.
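The abstract describes concatenating features from fine-tuned VGG-16, ResNet-152, and DenseNet-169 backbones before classification. The following is a minimal sketch of that idea, assuming a PyTorch/torchvision setup; the class name, pooled feature dimensions, and classification head are illustrative assumptions, not the authors' published implementation.

```python
# Hypothetical sketch (not the authors' code): concatenate pooled features from
# ImageNet-pretrained VGG-16, ResNet-152, and DenseNet-169 backbones, then
# classify with a shared fully connected head.
import torch
import torch.nn as nn
from torchvision import models

class HybridFeatureConcatNet(nn.Module):
    def __init__(self, num_classes=5):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        resnet = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
        densenet = models.densenet169(weights=models.DenseNet169_Weights.IMAGENET1K_V1)

        # Keep only the convolutional feature extractors; pool to fixed-size vectors.
        self.vgg_features = nn.Sequential(vgg.features, nn.AdaptiveAvgPool2d(1))        # 512-d
        self.resnet_features = nn.Sequential(*list(resnet.children())[:-1])             # 2048-d
        self.densenet_features = nn.Sequential(densenet.features,
                                               nn.ReLU(inplace=True),
                                               nn.AdaptiveAvgPool2d(1))                 # 1664-d

        # Classification head on the concatenated feature vector (512 + 2048 + 1664).
        self.classifier = nn.Sequential(
            nn.Linear(512 + 2048 + 1664, 512),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(512, num_classes),
        )

    def forward(self, x):
        f1 = torch.flatten(self.vgg_features(x), 1)
        f2 = torch.flatten(self.resnet_features(x), 1)
        f3 = torch.flatten(self.densenet_features(x), 1)
        return self.classifier(torch.cat([f1, f2, f3], dim=1))

# Example: a small batch of Pap smear crops resized to 224x224.
logits = HybridFeatureConcatNet(num_classes=5)(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 5])
```

For the 2-class setting reported in the abstract, the same sketch would simply use `num_classes=2`; the backbones could be frozen or fine-tuned depending on the transfer-learning strategy.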