Bowd Christopher, Belghith Akram, Christopher Mark, Araie Makoto, Iwase Aiko, Tomita Goji, Ohno-Matsui Kyoko, Saito Hitomi, Murata Hiroshi, Kikawa Tsutomu, Sugiyama Kazuhisa, Higashide Tomomi, Miki Atsuya, Nakazawa Toru, Aihara Makoto, Kim Tae-Woo, Leung Christopher Kai Shun, Weinreb Robert N, Zangwill Linda M
Hamilton Glaucoma Center and Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California (UC) San Diego, La Jolla, CA, United States.
Kanto Central Hospital of the Mutual Aid Association of Public School Teachers, Tokyo, Japan.
Front Ophthalmol (Lausanne). 2025 Aug 4;5:1624015. doi: 10.3389/fopht.2025.1624015. eCollection 2025.
To evaluate the diagnostic accuracy of a deep learning autoencoder-based model utilizing regions of interest (ROI) from optical coherence tomography (OCT) texture enface images for detecting glaucoma in myopic eyes.
This cross-sectional study included a total of 453 eyes from 315 participants in the multi-center "Swept-Source OCT (SS-OCT) Myopia and Glaucoma Study": 268 eyes from 168 healthy individuals and 185 eyes from 147 glaucomatous individuals. All participants underwent SS-OCT imaging, from which texture enface images were constructed and analyzed. The study compared four methods: (1) global retinal nerve fiber layer (RNFL) thickness, (2) texture enface image, (3) a single autoencoder model trained only on healthy eyes, and (4) a dual autoencoder model trained on both healthy and glaucomatous eyes. Diagnostic accuracy was assessed using the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC).
The dual autoencoder model achieved the highest AUROC (95% CI) (0.92 [0.88, 0.95]), significantly outperforming the single autoencoder model trained only on healthy eyes (0.86 [0.83, 0.88], p = 0.01), the global RNFL thickness model (0.84 [0.80, 0.86], p = 0.003), and the texture enface model (0.83 [0.79, 0.85], p = 0.005). Using AUPRC (95% CI), the dual autoencoder model (0.86 [0.83, 0.89]) also outperformed the single autoencoder model trained only on healthy eyes (0.80 [0.78, 0.82], p = 0.02), the global RNFL thickness model (0.74 [0.70, 0.76], p = 0.001), and the texture enface model (0.71 [0.68, 0.73], p < 0.001). No significant difference was observed between the global RNFL thickness measurement and the texture enface measurement (p = 0.47).
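Both summary metrics reported above can be computed from ranked scores alone. The following is a minimal, self-contained illustration (not the authors' evaluation code; the labels and scores are made-up toy data) of AUROC via the Mann-Whitney U interpretation and AUPRC via average precision:

```python
# Toy illustration of AUROC and AUPRC using only the standard library.
# Labels and scores below are fabricated examples, not study data.

def auroc(labels, scores):
    """AUROC = P(score of a positive > score of a negative); ties count 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def auprc(labels, scores):
    """Average precision: mean of precision at each true-positive rank."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    n_pos = sum(labels)
    tp, ap = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if labels[i] == 1:
            tp += 1
            ap += tp / rank  # precision at this recall increment
    return ap / n_pos

labels = [1, 1, 0, 1, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
print(round(auroc(labels, scores), 3))  # → 0.75
print(round(auprc(labels, scores), 3))  # → 0.83
```

Unlike AUROC, AUPRC is sensitive to class balance, which is why the paper reports both for this roughly 60/40 healthy/glaucoma sample.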
The dual autoencoder model, which integrates reconstruction errors from both healthy and glaucomatous training data, demonstrated superior diagnostic accuracy compared to the single autoencoder model, global RNFL thickness, and texture enface-based approaches. These findings suggest that deep learning models leveraging ROI-based reconstruction error from texture enface images may enhance glaucoma classification in myopic eyes, providing a robust alternative to conventional structural thickness metrics.
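The core idea of the dual model, scoring an eye by comparing how well an autoencoder trained on healthy eyes and one trained on glaucomatous eyes each reconstruct it, can be sketched as follows. This is a hypothetical illustration using rank-k linear autoencoders (truncated SVD) on synthetic data; the study's actual models are deep autoencoders applied to ROIs of OCT texture enface images, and every class name, dimension, and threshold below is an assumption for the sketch.

```python
# Sketch of dual reconstruction-error scoring with linear autoencoders.
# Synthetic data only; not the architecture or data from the study.
import numpy as np

class LinearAutoencoder:
    """Rank-k linear autoencoder: encode/decode via the top-k principal axes."""
    def __init__(self, k):
        self.k = k

    def fit(self, X):
        self.mean_ = X.mean(axis=0)
        _, _, vt = np.linalg.svd(X - self.mean_, full_matrices=False)
        self.components_ = vt[: self.k]  # (k, d) shared encoder/decoder weights
        return self

    def reconstruction_error(self, X):
        Z = (X - self.mean_) @ self.components_.T   # encode to k dims
        Xhat = Z @ self.components_ + self.mean_    # decode back to d dims
        return ((X - Xhat) ** 2).mean(axis=1)       # per-sample MSE

rng = np.random.default_rng(0)
d = 32
# Synthetic "healthy" and "glaucomatous" samples on different low-rank subspaces.
healthy = rng.normal(size=(200, 4)) @ rng.normal(size=(4, d))
glauc = rng.normal(size=(200, 4)) @ rng.normal(size=(4, d)) + 1.0

ae_h = LinearAutoencoder(k=4).fit(healthy)
ae_g = LinearAutoencoder(k=4).fit(glauc)

def glaucoma_score(X):
    # High when the healthy-trained AE reconstructs X poorly
    # relative to the glaucoma-trained AE.
    return ae_h.reconstruction_error(X) - ae_g.reconstruction_error(X)

test = np.vstack([healthy[:50], glauc[:50]])
labels = np.array([0] * 50 + [1] * 50)
scores = glaucoma_score(test)
acc = ((scores > 0) == labels).mean()  # threshold at 0 for this toy setup
print(acc)
```

The design choice worth noting is that the single-AE approach uses only the healthy reconstruction error as an anomaly score, whereas the dual model contrasts two class-specific errors, which the results above suggest is the more discriminative signal.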