Department of Ophthalmology and Visual Sciences, The Ohio State University, Columbus, OH, 43210, USA.
Department of Biomedical Informatics, The Ohio State University, Columbus, OH, 43210, USA.
Sci Rep. 2024 Feb 24;14(1):4494. doi: 10.1038/s41598-024-55056-y.
Glaucoma is the leading cause of irreversible blindness worldwide. Often asymptomatic for years, this disease can progress significantly before patients become aware of the loss of visual function. Critical examination of the optic nerve through ophthalmoscopy or using fundus images is a crucial component of glaucoma detection before the onset of vision loss. The vertical cup-to-disc ratio (VCDR) is a key structural indicator for glaucoma, as thinning of the superior and inferior neuroretinal rim is a hallmark of the disease. However, manual assessment of fundus images is both time-consuming and subject to variability based on clinician expertise and interpretation. In this study, we develop a robust and accurate automated system employing deep learning (DL) techniques, specifically the YOLOv7 architecture, for the detection of the optic disc and optic cup in fundus images and the subsequent calculation of VCDR. We also address the often-overlooked issue of adapting a DL model, initially trained on a specific population (e.g., European), for VCDR estimation in a different population. Our model was initially trained on ten publicly available datasets and subsequently fine-tuned on the REFUGE dataset, which comprises images collected from Chinese patients. The DL-derived VCDR displayed exceptional accuracy, achieving a Pearson correlation coefficient of 0.91 (P = 4.12 × 10) and a mean absolute error (MAE) of 0.0347 when compared to assessments by human experts. Our models also surpassed existing approaches on the REFUGE dataset, demonstrating higher Dice similarity coefficients and lower MAEs. Moreover, we developed an optimization approach capable of calibrating DL results for new populations. Our novel approach for detecting the optic disc and optic cup and calculating VCDR offers clinicians a promising tool that significantly reduces the manual workload of image assessment while improving both speed and accuracy.
Most importantly, this automated method effectively differentiates between glaucoma and non-glaucoma cases, making it a valuable asset for glaucoma detection.
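The abstract describes deriving the VCDR from detector output and comparing it against expert grades with MAE. As a rough illustration (not the authors' implementation), the sketch below computes a VCDR from two hypothetical YOLO-style bounding boxes, using box heights as stand-ins for the vertical cup and disc diameters, and an MAE over a list of predicted versus reference VCDRs; the function names and example coordinates are invented for illustration.

```python
def vcdr_from_boxes(disc_box, cup_box):
    """Vertical cup-to-disc ratio from two detector bounding boxes.

    Boxes are (x_min, y_min, x_max, y_max) in pixels; the box height
    approximates the vertical diameter of the optic disc or cup.
    """
    disc_height = disc_box[3] - disc_box[1]
    cup_height = cup_box[3] - cup_box[1]
    if disc_height <= 0:
        raise ValueError("disc box must have positive height")
    return cup_height / disc_height


def mean_absolute_error(predicted, reference):
    """MAE between model-derived and expert-graded VCDR values."""
    if len(predicted) != len(reference):
        raise ValueError("lists must have equal length")
    return sum(abs(p - r) for p, r in zip(predicted, reference)) / len(predicted)


# Hypothetical example: a cup box nested inside a disc box.
disc = (100, 80, 300, 290)  # height 210 px
cup = (150, 120, 260, 250)  # height 130 px
print(round(vcdr_from_boxes(disc, cup), 3))  # -> 0.619
```

A VCDR near or above roughly 0.6–0.7, or marked asymmetry between eyes, is one of the structural signs clinicians weigh when screening for glaucoma.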