Pu Lucas, Beale Oliver, Meng Xin
Department of Radiology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15260, USA.
Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213, USA.
Bioengineering (Basel). 2025 Feb 6;12(2):157. doi: 10.3390/bioengineering12020157.
Diabetic retinopathy (DR) is the leading cause of blindness among working-age adults. Early detection is crucial for reducing the risk of DR-related vision loss but remains challenging: manual detection is labor-intensive and often misses tiny DR lesions, motivating automated detection.
We aimed to develop and validate an annotation-free deep learning strategy for the automatic detection of exudates and bleeding spots on color fundus photography (CFP) images and ultrawide field (UWF) retinal images.
Three cohorts were created: two CFP cohorts (Kaggle-CFP and E-Ophtha) and one UWF cohort. Kaggle-CFP was used for algorithm development, while E-Ophtha, with manually annotated DR-related lesions, served as the independent test set. For additional independent testing, 50 DR-positive cases from both the Kaggle-CFP and UWF cohorts were manually outlined for bleeding and exudate spots. The remaining cases were used for algorithm training. A multiscale contrast-based shape descriptor transformed DR-verified retinal images into contrast fields. High-contrast regions were identified, and local image patches from abnormal and normal areas were extracted to train a U-Net model. Model performance was evaluated using sensitivity and false positive rates based on manual annotations in the independent test sets.
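The abstract does not give implementation details for the contrast-field and patch-extraction steps; the sketch below, with hypothetical function names and illustrative scales and thresholds, shows one plausible reading: a local z-score contrast computed at several window sizes, with the maximum response across scales defining the contrast field, and fixed-size patches cut around high-contrast pixels as candidate "abnormal" training samples.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def multiscale_contrast_field(img, scales=(3, 7, 15)):
    """Hypothetical multiscale contrast descriptor: at each scale,
    contrast is the local z-score |pixel - window mean| / window std;
    the field keeps the maximum response across scales."""
    img = img.astype(np.float64)
    field = np.zeros_like(img)
    for s in scales:
        mean = uniform_filter(img, size=s)
        sq_mean = uniform_filter(img ** 2, size=s)
        std = np.sqrt(np.maximum(sq_mean - mean ** 2, 1e-8))
        field = np.maximum(field, np.abs(img - mean) / std)
    return field

def extract_candidate_patches(img, field, patch=16, thresh=1.0, max_patches=64):
    """Cut fixed-size patches around high-contrast pixels; these would serve
    as 'abnormal' training samples for the U-Net, with 'normal' patches
    drawn analogously from low-contrast regions."""
    half = patch // 2
    patches = []
    for y, x in zip(*np.nonzero(field > thresh)):
        if half <= y <= img.shape[0] - half and half <= x <= img.shape[1] - half:
            patches.append(img[y - half:y + half, x - half:x + half])
            if len(patches) >= max_patches:
                break
    return np.array(patches)
```

The absolute-value contrast responds to both bright spots (exudates) and dark spots (bleeding); the actual descriptor, scales, and threshold used in the study may differ.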
On the independent CFP cohort, the trained model achieved high sensitivities for detecting and segmenting DR lesions: microaneurysms (91.5%, 9.04 false positives per image), hemorrhages (92.6%, 2.26 false positives per image), hard exudates (92.3%, 7.72 false positives per image), and soft exudates (90.7%, 0.18 false positives per image). For UWF images, the model's performance varied by lesion size. Bleeding detection sensitivity increased with lesion size, from 41.9% (6.48 false positives per image) for the smallest spots to 93.4% (5.80 false positives per image) for the largest. Exudate detection showed high sensitivity across all sizes, ranging from 86.9% (24.94 false positives per image) to 96.2% (6.40 false positives per image), though false positive rates were higher for smaller lesions.
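The abstract does not state exactly how lesion-level sensitivity and false positives per image are counted; a minimal sketch, assuming lesion-level matching by pixel overlap between predicted and annotated connected components (function name hypothetical):

```python
import numpy as np
from scipy.ndimage import label

def lesion_sensitivity_and_fp(pred_mask, gt_mask):
    """Lesion-level evaluation, one plausible reading of the metrics:
    an annotated lesion counts as detected if any of its pixels are
    predicted; a predicted connected component overlapping no annotation
    counts as one false positive."""
    gt_lab, n_gt = label(gt_mask)
    pred_lab, n_pred = label(pred_mask)
    detected = sum(1 for i in range(1, n_gt + 1) if pred_mask[gt_lab == i].any())
    false_pos = sum(1 for j in range(1, n_pred + 1) if not gt_mask[pred_lab == j].any())
    sensitivity = detected / n_gt if n_gt else float("nan")
    return sensitivity, false_pos
```

Averaging `false_pos` over the test images would give the false-positives-per-image figures reported above; the study's actual matching criterion (e.g., minimum overlap fraction) may be stricter.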
Our experiments demonstrate the feasibility of training a deep neural network to detect and segment DR-related lesions without relying on manual lesion annotations.