Mohsen Farida, Belhaouari Samir, Shah Zubair
College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar.
Sci Rep. 2025 Aug 21;15(1):30706. doi: 10.1038/s41598-025-14944-7.
Diabetic retinopathy is a serious ocular complication that poses a significant threat to patients' vision and overall health. Early detection and accurate grading are essential to prevent vision loss. Current automatic grading methods rely heavily on deep learning applied to retinal fundus images, but the complex, irregular lesion patterns in these images, which vary in shape and distribution, make subtle changes difficult to capture. This study introduces RadFuse, a multi-representation deep learning framework that integrates non-linear RadEx-transformed sinogram images with traditional fundus images to enhance diabetic retinopathy detection and grading. Our RadEx transformation, an optimized non-linear extension of the Radon transform, generates sinogram representations that capture complex retinal lesion patterns. By leveraging both spatial and transformed-domain information, RadFuse enriches the feature set available to deep learning models, improving the differentiation of severity levels. We conducted extensive experiments on two benchmark datasets, APTOS-2019 and DDR, using three convolutional neural networks (CNNs): ResNeXt-50, MobileNetV2, and VGG19. RadFuse showed significant improvements over fundus-image-only models across all three CNN architectures and outperformed state-of-the-art methods on both datasets. For severity grading across five stages, RadFuse achieved a quadratic weighted kappa of 93.24%, an accuracy of 87.07%, and an F1-score of 87.17%. In binary classification between healthy and diabetic retinopathy cases, the method reached an accuracy of 99.09%, precision of 98.58%, and recall of 99.64%, surpassing previously established models. These results demonstrate RadFuse's capacity to capture complex non-linear features, advancing diabetic retinopathy classification and promoting the integration of advanced mathematical transforms in medical image analysis.
The source code will be available at https://github.com/Farida-Ali/RadEx-Transform/tree/main.
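As context for the transformed-domain branch, the sketch below illustrates the classical Radon transform that RadEx extends: each row of the sinogram is the set of line integrals of the image at one projection angle. The `radon_sinogram` helper and the `log1p` non-linearity applied at the end are illustrative assumptions only; the paper's actual RadEx transform is an optimized non-linear extension whose details are in the linked repository, not reproduced here.

```python
import numpy as np

def radon_sinogram(image, n_angles=90):
    """Minimal discrete Radon transform (sketch, not the paper's RadEx).

    For each projection angle, the image is rotated about its centre
    (nearest-neighbour sampling) and summed along columns, so each row
    of the result approximates the line integrals at that angle.
    """
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    sinogram = np.zeros((n_angles, w))
    for i, theta in enumerate(np.linspace(0.0, np.pi, n_angles, endpoint=False)):
        c, s = np.cos(theta), np.sin(theta)
        # Rotate the sampling grid about the image centre.
        xr = c * (xs - cx) - s * (ys - cy) + cx
        yr = s * (xs - cx) + c * (ys - cy) + cy
        xi = np.clip(np.round(xr).astype(int), 0, w - 1)
        yi = np.clip(np.round(yr).astype(int), 0, h - 1)
        rotated = image[yi, xi]
        sinogram[i] = rotated.sum(axis=0)  # line integrals along columns
    return sinogram

# Hypothetical non-linear mapping as a stand-in for the RadEx step:
# compressing the sinogram's dynamic range before feeding it to a CNN.
def nonlinear_sinogram(image, n_angles=90):
    return np.log1p(radon_sinogram(image, n_angles))
```

In the RadFuse framework such a sinogram image would form the second input representation alongside the original fundus image; how the two branches are fused inside the CNNs is described in the full paper.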