Vision, Imaging and Performance (VIP) Laboratory, Duke Eye Center and Department of Ophthalmology, Duke University, Durham, North Carolina, USA.
Am J Ophthalmol. 2019 May;201:9-18. doi: 10.1016/j.ajo.2019.01.011. Epub 2019 Jan 26.
To train a deep learning (DL) algorithm that quantifies glaucomatous neuroretinal damage on fundus photographs, using the minimum rim width relative to Bruch's membrane opening (BMO-MRW) from spectral-domain optical coherence tomography (SDOCT) as the reference.
Cross-sectional study.
A total of 9282 pairs of optic disc photographs and SDOCT optic nerve head scans from 927 eyes of 490 subjects were randomly divided into training plus validation (80%) and test (20%) sets. A DL convolutional neural network was trained to predict the SDOCT global and sector BMO-MRW values from the optic disc photographs. The predictions of the DL network were compared with the actual SDOCT measurements. The area under the receiver operating characteristic curve (AUC) was used to evaluate the ability of the network to discriminate eyes with glaucomatous visual field loss from normal eyes.
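The training setup described above can be sketched as a convolutional regression network that maps a disc photograph to continuous BMO-MRW targets. This is a minimal illustration, not the authors' code: the architecture, image size, and output count (one global value plus six sectors, as in typical SDOCT BMO-MRW reports) are all assumptions, and the data here are random stand-ins.

```python
# Hypothetical sketch of CNN regression from disc photographs to
# SDOCT BMO-MRW values; architecture and output count are assumed.
import torch
import torch.nn as nn

class MRWRegressor(nn.Module):
    def __init__(self, n_outputs=7):  # 1 global + 6 sector values (assumption)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pooling
        )
        self.head = nn.Linear(32, n_outputs)  # regression head, values in µm

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Synthetic stand-ins for disc photographs and SDOCT BMO-MRW targets.
photos = torch.randn(8, 3, 64, 64)
mrw_targets = torch.rand(8, 7) * 400  # µm, arbitrary scale

model = MRWRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.mse_loss(model(photos), mrw_targets)
loss.backward()
opt.step()
```

Framing the task as regression on continuous SDOCT measurements, rather than classification on human labels, is what lets the network output a quantitative estimate of rim width.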
The DL predictions of global BMO-MRW for all optic disc photographs in the test set (mean ± standard deviation [SD]: 228.8 ± 63.1 μm) were highly correlated with the observed SDOCT values (mean ± SD: 226.0 ± 73.8 μm; Pearson r = 0.88; R² = 77%; P < .001), with a mean absolute error of 27.8 μm. The AUCs for discriminating glaucomatous from healthy eyes were 0.945 (95% confidence interval [CI]: 0.874-0.980) for the DL predictions and 0.933 (95% CI: 0.856-0.975) for the actual SDOCT global BMO-MRW measurements (P = .587).
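The metrics reported above (Pearson r, R², mean absolute error, AUC) can be computed as follows. This is a hedged sketch on synthetic data, since the study's predictions are not available here; the noise level, sample size, and diagnostic cutoff are illustrative assumptions.

```python
# Sketch of the evaluation metrics on synthetic data (not study data).
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
observed = rng.normal(226.0, 73.8, 200)        # SDOCT global BMO-MRW, µm
predicted = observed + rng.normal(0, 35, 200)  # simulated DL predictions

r, p_value = pearsonr(predicted, observed)     # correlation with SDOCT
r_squared = r ** 2                             # reported as a percentage
mae = np.mean(np.abs(predicted - observed))    # mean absolute error, µm

# Lower BMO-MRW indicates more neuroretinal damage, so the "glaucoma"
# score is the negated rim width; labels use an arbitrary toy cutoff.
glaucoma = (observed < 180).astype(int)
auc = roc_auc_score(glaucoma, -predicted)
```

Negating the predicted rim width before computing the AUC reflects that thinner rims, not thicker ones, are the sign of disease.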
A DL network can be trained to quantify the amount of neuroretinal damage on optic disc photographs using SDOCT BMO-MRW as a reference. The algorithm showed high accuracy for glaucoma detection and may eliminate the need for human grading of disc photographs.