Thanapaisal Sukhumal, Uttakit Passawut, Ittharat Worapon, Suvannachart Pukkapol, Supasai Pawasoot, Polpinit Pattarawit, Sirikarn Prapassara, Hanpinitsak Panawit
Department of Ophthalmology, Faculty of Medicine, Khon Kaen University, Khon Kaen, 40002, Thailand.
KKU Glaucoma Center of Excellence, Department of Ophthalmology, Faculty of Medicine, Khon Kaen University, Khon Kaen, Thailand.
Sci Rep. 2025 Jul 18;15(1):26151. doi: 10.1038/s41598-025-11697-1.
This study evaluates the performance of a machine learning model in classifying glaucoma severity using color fundus photographs. Glaucoma severity grading was based on the Hodapp-Parrish-Anderson (HPA) criteria, incorporating the mean deviation value, defective points in the pattern deviation probability map, and defect proximity to the fixation point. A dataset of 2,940 fundus photographs from 1,789 patients was matched with visual field tests and evenly distributed across three classes: normal, mild-moderate, and severe glaucoma stages. EfficientNetB7, a convolutional neural network, was trained on these images using transfer learning and fine-tuning techniques. The model achieved an overall accuracy of 0.871 (95% CI, 0.822-0.919). For the normal, mild-moderate, and severe classes, the area under the curve (AUC) values were 0.988, 0.932, and 0.963; sensitivities were 0.903, 0.823, and 0.887; and specificities were 0.960, 0.911, and 0.936, respectively. Analysis of the confusion matrix revealed the impact of structural-functional relationships in glaucoma on model performance. In conclusion, EfficientNetB7 demonstrated high accuracy in classifying glaucoma severity based on the HPA criteria using fundus photographs, offering potential for clinical application in glaucoma diagnosis and management.
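The abstract describes training EfficientNetB7 with transfer learning followed by fine-tuning for three-class severity grading. The sketch below illustrates one common way such a pipeline is set up; it is not the authors' implementation, and the framework (TensorFlow/Keras), input resolution, classification head, dropout rate, and learning rates are all assumptions for illustration.

```python
# Minimal sketch (not the authors' code): three-class fundus-image classifier
# built on EfficientNetB7 with ImageNet weights, trained in two stages
# (head-only transfer learning, then full-network fine-tuning).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 3  # normal, mild-moderate, severe (HPA-based grading)

# Load EfficientNetB7 pretrained on ImageNet, without its classification head.
base = tf.keras.applications.EfficientNetB7(
    include_top=False, weights="imagenet", input_shape=(600, 600, 3)
)
base.trainable = False  # Stage 1: freeze backbone, train only the new head.

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),                               # assumed regularization
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)

# Stage 2: unfreeze the backbone and fine-tune at a lower learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Per-class AUC, sensitivity, and specificity of the kind reported above are typically derived from the softmax outputs in a one-vs-rest fashion and from the resulting confusion matrix.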