Hacisoftaoglu Recep E, Karakaya Mahmut, Sallam Ahmed B
Dept. of Computer Science, University of Central Arkansas, Conway, AR, 72035, USA.
Jones Eye Institute, University of Arkansas for Medical Sciences, Little Rock, AR 72205, USA.
Pattern Recognit Lett. 2020 Jul;135:409-417. doi: 10.1016/j.patrec.2020.04.009. Epub 2020 May 13.
Diabetic retinopathy (DR) may result in varying degrees of vision loss and even blindness if not diagnosed in a timely manner. An annual eye exam therefore supports early detection and can prevent vision loss at earlier stages, especially for diabetic patients. Recent technological advances have brought smartphone-based retinal imaging systems to market, enabling compact, low-powered, and affordable DR screening in diverse environments. However, the accuracy of DR detection depends on the field of view and image quality. Because smartphone-based retinal imaging systems have much more compact designs than traditional fundus cameras, the captured images are likely to be of lower quality and have a smaller field of view. Our motivation in this paper is to develop an automatic DR detection model for smartphone-based retinal images using a deep learning approach with the ResNet50 network. This study first utilized the well-known AlexNet, GoogLeNet, and ResNet50 architectures with a transfer learning approach. Second, these networks were retrained with retinal images from several datasets, including EyePACS, Messidor, IDRiD, and Messidor-2, to investigate the effect of training on single, cross, and multiple datasets. Third, the proposed ResNet50 model was applied to synthetic smartphone-based images to explore the DR detection accuracy of smartphone-based retinal imaging systems. For vision-threatening diabetic retinopathy detection, the proposed approach achieved a high classification accuracy of 98.6%, with 98.2% sensitivity and 99.1% specificity, and an AUC of 0.9978 on an independent test dataset. As the main contributions, DR detection accuracy was improved using deep transfer learning on the ResNet50 network with publicly available datasets, and the effect of the field of view in smartphone-based retinal imaging was studied.
Although fewer images were used in the training set than in existing studies, considerably high accuracies were still obtained on the validation and test data.