SERI-NTU Advanced Ocular Engineering (STANCE), Singapore.
School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore.
Ann N Y Acad Sci. 2022 Sep;1515(1):237-248. doi: 10.1111/nyas.14844. Epub 2022 Jun 21.
To evaluate machine learning (ML) approaches for structure-function modeling to estimate visual field (VF) loss in glaucoma, models from different ML approaches were trained on optical coherence tomography thickness measurements to estimate global VF mean deviation (VF MD) and focal VF loss from 24-2 standard automated perimetry. The models were compared using mean absolute errors (MAEs). Baseline MAEs were obtained by estimating the VF values from their means. Data of 832 eyes from 569 participants were included, with 537 Asian eyes used for training, and 148 Asian and 111 Caucasian eyes set aside as the respective test sets. All ML models performed significantly better than baseline. Gradient-boosted trees (XGB) achieved the lowest MAEs of 3.01 (95% CI: 2.57, 3.48) dB and 3.04 (95% CI: 2.59, 3.99) dB for VF MD estimation in the Asian and Caucasian test sets, although the differences between models were not significant. In focal VF estimation, XGB achieved median MAEs of 4.44 [IQR 3.45-5.17] dB and 3.87 [IQR 3.64-4.22] dB across the 24-2 VF for the Asian and Caucasian test sets, respectively, and was comparable to the VF estimates from support vector regression (SVR) models. VF estimates from both XGB and SVR were significantly better than those from the other models. These results show that XGB and SVR could potentially be used for both global and focal structure-function modeling in glaucoma.
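The evaluation setup described above can be sketched as follows. This is a minimal illustration, not the authors' code: the feature count, synthetic data, and hyperparameters are assumptions made for the example, standing in for OCT thickness inputs and VF MD targets. It shows the general pattern of training gradient-boosted trees (XGB) and support vector regression (SVR) on thickness features and comparing their MAEs against a mean-value baseline.

```python
# Minimal sketch of the reported comparison: regressors trained on OCT
# thickness features to estimate VF MD (dB), evaluated by MAE against a
# mean-value baseline. Data here are synthetic placeholders.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error
from xgboost import XGBRegressor

rng = np.random.default_rng(0)

# Placeholder stand-ins for OCT thickness measurements (features) and
# VF mean deviation in dB (target); real data would come from OCT and
# 24-2 standard automated perimetry.
n_train, n_test, n_features = 537, 148, 64
X_train = rng.normal(size=(n_train, n_features))
y_train = rng.normal(loc=-5.0, scale=6.0, size=n_train)
X_test = rng.normal(size=(n_test, n_features))
y_test = rng.normal(loc=-5.0, scale=6.0, size=n_test)

# Baseline: estimate every test VF MD with the mean of the training targets.
baseline_pred = np.full_like(y_test, y_train.mean())
print(f"baseline MAE: {mean_absolute_error(y_test, baseline_pred):.2f} dB")

# The two best-performing approaches reported in the abstract; the
# hyperparameters below are illustrative, not the tuned values.
models = {
    "XGB": XGBRegressor(n_estimators=300, max_depth=3, learning_rate=0.05),
    "SVR": SVR(kernel="rbf", C=1.0, epsilon=0.5),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    mae = mean_absolute_error(y_test, model.predict(X_test))
    print(f"{name} MAE: {mae:.2f} dB")
```

In the study itself, the same comparison was run separately for global VF MD and for each 24-2 VF test point (focal estimation), with MAEs summarized per test set.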