Department of Electrical and Computer Engineering, Duke University, Durham, NC, USA.
Department of Ophthalmology, Duke University, Durham, NC, USA.
Transl Vis Sci Technol. 2021 May 3;10(6):30. doi: 10.1167/tvst.10.6.30.
This study aims to meet a growing need for a fully automated, learning-based interpretation tool for retinal images obtained remotely (e.g., via teleophthalmology) through different imaging modalities, which may include imperfect (uninterpretable) images.
We conducted a retrospective study of 1148 optical coherence tomography (OCT) and color fundus photography (CFP) retinal images obtained with Topcon's Maestro care unit from 647 patients with diabetes. To identify retinal pathology, we developed a convolutional neural network (CNN) with dual-modal inputs (i.e., paired CFP and OCT images). We also developed a novel alternate gradient descent algorithm to train the CNN, which allows the use of uninterpretable CFP/OCT images (i.e., ungradable images that do not contain sufficient image biomarkers for the reviewer to conclude the absence or presence of retinal pathology). The dataset was split 9:1 into training and testing sets for training and validating the CNN. Paired CFP/OCT inputs (obtained from a single eye of a patient) were labeled retinal pathology negative (RPN; 924 images) if both imaging modalities were free of retinal pathology, or if one modality was uninterpretable and the other showed no retinal pathology. If either imaging modality exhibited referable retinal pathology, the corresponding CFP/OCT pair was labeled retinal pathology positive (RPP; 224 images).
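The RPN/RPP labeling rule above can be sketched as a small function. This is an illustrative reconstruction, not the authors' code; the `Grade` enum and the handling of pairs where both modalities are uninterpretable are assumptions (the abstract does not state how such pairs were treated, so they are left unlabeled here).

```python
from enum import Enum
from typing import Optional


class Grade(Enum):
    """Per-modality grade assigned by a human reviewer (assumed representation)."""
    NO_PATHOLOGY = "no_pathology"
    PATHOLOGY = "pathology"          # referable retinal pathology
    UNINTERPRETABLE = "uninterpretable"


def label_pair(cfp: Grade, oct_: Grade) -> Optional[str]:
    """Label a paired CFP/OCT input from a single eye.

    RPP if either modality shows referable pathology; RPN if both are
    pathology-free, or if one is uninterpretable and the other is
    pathology-free. Pairs with both modalities uninterpretable get no
    label here (assumption -- not specified in the abstract).
    """
    if Grade.PATHOLOGY in (cfp, oct_):
        return "RPP"
    if cfp is Grade.UNINTERPRETABLE and oct_ is Grade.UNINTERPRETABLE:
        return None
    return "RPN"
```

For example, an eye with an uninterpretable CFP but a pathology-free OCT is labeled RPN, while any pair with pathology in at least one modality is RPP.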
Our approach achieved 88.60% (95% confidence interval [CI] = 82.76% to 94.43%) accuracy in identifying pathology, with a false negative rate (FNR) of 12.28% (95% CI = 6.26% to 18.31%), recall (sensitivity) of 87.72% (95% CI = 81.69% to 93.74%), specificity of 89.47% (95% CI = 83.84% to 95.11%), and an area under the receiver operating characteristic curve (AUC-ROC) of 92.74% (95% CI = 87.71% to 97.76%).
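The reported metrics follow the standard confusion-matrix definitions; note that the reported FNR and recall sum to 100%, as they must. A minimal sketch of these definitions (the counts in the usage example are hypothetical, not the study's actual test-set counts):

```python
def binary_metrics(tp: int, fn: int, tn: int, fp: int) -> dict:
    """Standard binary classification metrics from confusion-matrix counts.

    tp/fn: positive (RPP) pairs correctly / incorrectly classified.
    tn/fp: negative (RPN) pairs correctly / incorrectly classified.
    """
    recall = tp / (tp + fn)               # sensitivity
    fnr = fn / (tp + fn)                  # false negative rate = 1 - recall
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return {
        "accuracy": accuracy,
        "fnr": fnr,
        "recall": recall,
        "specificity": specificity,
    }
```

With hypothetical counts such as `binary_metrics(tp=50, fn=7, tn=51, fp=6)`, recall is 50/57 and FNR is 7/57, which sum to 1 by construction.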
Our model can be successfully deployed in clinical practice to facilitate automated remote retinal pathology identification.
A fully automated tool for early diagnosis of retinal pathology might allow for earlier treatment and improved visual outcomes.