Department of Electronics and Communication Engineering, KPR Institute of Engineering and Technology, Avinashi road, Coimbatore, 641407 Tamilnadu, India.
Renewable Energy Lab, Department of Electrical and Electronics Engineering, KPR Institute of Engineering and Technology, Avinashi road, Coimbatore, 641407 Tamilnadu, India.
Biomed Res Int. 2021 May 4;2021:5584004. doi: 10.1155/2021/5584004. eCollection 2021.
Traditional screening for cervical cancer type classification depends largely on the pathologist's experience and suffers from limited accuracy. Colposcopy is a critical component of cervical cancer prevention; in conjunction with precancer screening and treatment, it has played an essential role in lowering the incidence of and mortality from cervical cancer over the last 50 years. However, as workloads increase, visual screening leads to misdiagnosis and low diagnostic efficiency. Medical image processing with convolutional neural network (CNN) models has shown its strength for classifying cervical cancer types in the field of deep learning. This paper proposes two deep learning CNN architectures to detect cervical cancer from colposcopy images: a VGG19 transfer-learning (TL) model and CYENET. In the first architecture, VGG19 is adopted via transfer learning for the studies. A new model, termed the Colposcopy Ensemble Network (CYENET), is developed to classify cervical cancers from colposcopy images automatically. Accuracy, specificity, and sensitivity are estimated for the developed models. The classification accuracy of VGG19 (TL) was 73.3%, a relatively satisfactory result; from the kappa score of the VGG19 model, we can interpret that it falls in the moderate-agreement category. The experimental results show that the proposed CYENET achieved high sensitivity, specificity, and kappa scores of 92.4%, 96.2%, and 88%, respectively. The classification accuracy of the CYENET model improved to 92.3%, 19 percentage points higher than that of the VGG19 (TL) model.
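The metrics reported above (sensitivity, specificity, accuracy, and Cohen's kappa) all derive from a confusion matrix. As a minimal sketch, the snippet below computes them for a hypothetical binary 2x2 matrix; the counts are illustrative assumptions, not the paper's raw data:

```python
def binary_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity, accuracy, and Cohen's kappa
    from the four cells of a 2x2 confusion matrix."""
    n = tp + fn + fp + tn
    sensitivity = tp / (tp + fn)      # true-positive rate
    specificity = tn / (tn + fp)      # true-negative rate
    accuracy = (tp + tn) / n          # observed agreement, p_o
    # Expected chance agreement, p_e, from the row/column marginals
    p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (accuracy - p_e) / (1 - p_e)
    return sensitivity, specificity, accuracy, kappa

# Hypothetical counts for 100 images, chosen only to illustrate the formulas
sens, spec, acc, kappa = binary_metrics(tp=46, fn=4, fp=2, tn=48)
print(sens, spec, acc, round(kappa, 2))  # 0.92 0.96 0.94 0.88
```

Kappa discounts the agreement expected by chance, which is why a model can have high raw accuracy yet only a "moderate" kappa; values around 0.4 to 0.6 are conventionally read as moderate agreement, and values above 0.8 as almost perfect.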