Nneji Grace Ugochi, Cai Jingye, Deng Jianhua, Monday Happy Nkanta, Hossin Md Altab, Nahar Saifun
School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China.
School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China.
Diagnostics (Basel). 2022 Feb 19;12(2):540. doi: 10.3390/diagnostics12020540.
Diabetic retinopathy (DR) is one of the most common causes of visual impairment among people aged 25 to 74 worldwide. It results from persistently high blood glucose levels, which damage the retinal blood vessels and can lead to vision loss. Early diagnosis can minimise the risk of progression to proliferative diabetic retinopathy, the advanced stage of the disease, which carries a higher risk of severe impairment. It is therefore important to classify DR stages. To this end, this paper presents a weighted fusion deep learning network (WFDLN) that automatically extracts features and classifies DR stages from fundus scans. The proposed framework addresses low image quality and identifies retinopathy symptoms in fundus images. WFDLN processes two channels of fundus images: contrast-limited adaptive histogram equalization (CLAHE) images and contrast-enhanced Canny edge detection (CECED) images. Features of the CLAHE images are extracted by a fine-tuned Inception V3, whereas features of the CECED images are extracted by a fine-tuned VGG-16. The outputs of the two channels are merged with a weighted scheme, and softmax classification determines the final recognition result. Experimental results show that the proposed network identifies DR stages with high accuracy. On the Messidor dataset, the proposed method reports an accuracy of 98.5%, sensitivity of 98.9%, and specificity of 98.0%; on the Kaggle dataset, it reports an accuracy of 98.0%, sensitivity of 98.7%, and specificity of 97.8%. Compared with other models, the proposed network achieves comparable performance.
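The two-channel fusion step described above can be sketched in NumPy as follows. This is a minimal illustration, not the paper's implementation: the fusion weights, the number of DR stages, and the per-channel logit values below are all illustrative assumptions, since the abstract does not specify them.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over class logits.
    e = np.exp(z - np.max(z))
    return e / e.sum()

def weighted_fusion(logits_clahe, logits_ceced, w_clahe=0.5, w_ceced=0.5):
    """Merge the class logits of the CLAHE and CECED channels with
    scalar weights, then apply softmax to obtain the final class
    probabilities. The 0.5/0.5 weights are illustrative, not the
    values used in the paper."""
    fused = (w_clahe * np.asarray(logits_clahe)
             + w_ceced * np.asarray(logits_ceced))
    return softmax(fused)

# Hypothetical per-channel logits for five DR stages
# (e.g. none, mild, moderate, severe, proliferative).
p = weighted_fusion([2.0, 0.5, 0.1, -1.0, 0.3],
                    [1.5, 0.7, 0.2, -0.5, 0.1])
print(p.argmax())  # index of the predicted DR stage
```

In practice the two logit vectors would come from the fine-tuned Inception V3 (CLAHE channel) and VGG-16 (CECED channel) heads, and the weights would be chosen on a validation set.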