Department of Ophthalmology, Seoul National University College of Medicine, Seoul, Korea.
Retina. 2022 Oct 1;42(10):1889-1896. doi: 10.1097/IAE.0000000000003550.
We aimed to develop a deep learning model for detecting and localizing retinal breaks in ultrawidefield fundus (UWF) images.
We retrospectively enrolled treatment-naive patients who were diagnosed with a retinal break or rhegmatogenous retinal detachment and had UWF images. The model was developed on a YOLO v3 backbone with transfer learning. Model performance was evaluated with per-image classification and per-object detection.
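Per-object detection evaluation typically matches predicted boxes to ground-truth breaks by intersection-over-union (IoU); the exact matching criterion is not stated in the abstract, so the following is a minimal sketch of the standard IoU computation commonly used with YOLO-style detectors:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


# Example: two 10x10 boxes overlapping in a 5x5 region.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # → 25 / 175 ≈ 0.1428...
```

A prediction is usually counted as a true positive when its IoU with an unmatched ground-truth box exceeds a threshold (commonly 0.5), and average precision is computed over the resulting precision-recall curve.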
Overall, 4,505 UWF images from 940 patients were used in this study. Among them, 306 UWF images from 84 patients comprised the test set. In per-object detection, the average precision of the detection model over all retinal breaks was 0.840. At the best threshold, the overall precision, recall, and F1 score were 0.6800, 0.9189, and 0.7816, respectively. In per-image classification, the model achieved an area under the receiver operating characteristic curve of 0.957 on the test set. The overall accuracy, sensitivity, and specificity on the test set were 0.9085, 0.8966, and 0.9158, respectively.
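The reported F1 score is the harmonic mean of precision and recall, which can be verified directly from the two figures given above:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


# Values reported at the best threshold: precision 0.6800, recall 0.9189.
print(round(f1_score(0.6800, 0.9189), 4))  # → 0.7816
```

This reproduces the reported F1 of 0.7816, confirming the three metrics are internally consistent.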
The UWF image-based deep learning model evaluated in the current study performed well in diagnosing and locating retinal breaks.