Abbasi Rashid, Amin Farhan, Alabrah Amerah, Choi Gyu Sang, Khan Salabat, Bin Heyat Md Belal, Iqbal Muhammad Shahid, Chen Huiling
College of Computer Science and Artificial Intelligence, Wenzhou University, Wenzhou, 325035, China.
School of Computer Science and Engineering, Yeungnam University, Gyeongsan, 38541, Republic of Korea.
Sci Rep. 2025 Jul 9;15(1):24647. doi: 10.1038/s41598-025-09394-0.
Diabetic retinopathy (DR) is an eye disease that, like age-related macular degeneration, causes pathological changes in the retinal neural and vascular systems. Fundus imaging has recently become a popular technology and is widely used for clinical diagnosis of conditions such as diabetic retinopathy. It is evident from the literature that changes in image quality due to uneven illumination, pigmentation level effects, and camera sensitivity degrade clinical performance, particularly in automated image analysis systems. In addition, low-quality retinal images make subsequent precise segmentation a challenging task for computer-aided diagnosis. To address this issue, we propose an adaptive enhancement-based Deep Convolutional Neural Network (DCNN) model for diabetic retinopathy (DR). In the proposed model, an adaptive gamma enhancement matrix is used to optimize the color channels and standardize contrast in the images. The model also integrates quantile-based histogram equalization to improve the perceptibility of the fundus image. The proposed model provides a marked improvement in color fundus images and is particularly suited to low-contrast images. We performed several experiments and evaluated performance on the large public Messidor dataset. The proposed model efficiently classifies distinct groups of retinal images. The average assessment score between the original and enhanced images is 0.1942 (standard deviation 0.0799), with a Peak Signal-to-Noise Ratio (PSNR) of 28.79 and a Structural Similarity Index (SSIM) of 0.71. The best classification accuracy is [Formula: see text], indicating that Convolutional Neural Networks (CNNs) and transfer learning are superior to traditional methods. The results show that the proposed model increases the contrast of a given color image without altering its structural information.
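As a rough illustration of the enhancement step described in the abstract, the sketch below combines per-channel adaptive gamma correction with quantile-based contrast stretching. It is a minimal reconstruction under assumptions: the function name adaptive_gamma_quantile_enhance and the parameters low_q, high_q, and target_mean are hypothetical and not taken from the paper, and the authors' exact gamma-matrix and histogram-equalization formulation may differ.

```python
import numpy as np

def adaptive_gamma_quantile_enhance(rgb, low_q=0.01, high_q=0.99, target_mean=0.5):
    """Hypothetical sketch: per-channel adaptive gamma correction followed by
    quantile-based contrast stretching. Parameter values are assumptions,
    not taken from the paper."""
    img = rgb.astype(np.float64) / 255.0
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        ch = img[..., c]
        # Adaptive gamma: pick the exponent that moves the channel mean toward target_mean.
        mean = max(ch.mean(), 1e-6)
        gamma = np.log(target_mean) / np.log(mean)
        ch = ch ** gamma
        # Quantile-based stretching: map the [low_q, high_q] quantile range onto [0, 1].
        lo, hi = np.quantile(ch, [low_q, high_q])
        out[..., c] = np.clip((ch - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    return (out * 255).astype(np.uint8)
```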
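The PSNR and SSIM figures reported above (28.79 and 0.71) can be reproduced for any original/enhanced image pair with scikit-image's standard metric implementations; the snippet below is a usage sketch, assuming uint8 RGB inputs and scikit-image >= 0.19 (for the channel_axis argument), not the authors' own evaluation code.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def quality_scores(original, enhanced):
    """Compute PSNR and SSIM between an original and an enhanced fundus image.
    Assumes both are uint8 RGB arrays of the same shape."""
    psnr = peak_signal_noise_ratio(original, enhanced, data_range=255)
    ssim = structural_similarity(original, enhanced, data_range=255, channel_axis=-1)
    return psnr, ssim
```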