

PyGlaucoMetrics: A Stacked Weight-Based Machine Learning Approach for Glaucoma Detection Using Visual Field Data.

Author Information

Moradi Mousa, Hashemabad Saber Kazeminasab, Vu Daniel M, Soneru Allison R, Fujita Asahi, Wang Mengyu, Elze Tobias, Eslami Mohammad, Zebardast Nazlee

Affiliations

Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, MA 02114, USA.

Massachusetts Eye and Ear, Harvard Medical School, Boston, MA 02114, USA.

Publication Information

Medicina (Kaunas). 2025 Mar 20;61(3):541. doi: 10.3390/medicina61030541.

Abstract

Background and Objectives: Glaucoma (GL) classification is crucial for early diagnosis and treatment, yet relying solely on stand-alone models or International Classification of Diseases (ICD) codes is insufficient due to limited predictive power and inconsistencies in clinical labeling. This study aims to improve GL classification using stacked weight-based machine learning models.

Materials and Methods: We analyzed a subset of 33,636 participants (58% female) with 340,444 visual fields (VFs) from the Mass Eye and Ear (MEE) dataset. Five clinically relevant GL detection models (LoGTS, UKGTS, Kang, HAP2_part1, and Foster) were selected to serve as base models. Two multi-layer perceptron (MLP) models were trained using 52 total deviation (TD) and pattern deviation (PD) values from Humphrey field analyzer (HFA) 24-2 VF tests, along with four clinical variables (age, gender, follow-up time, and race), to extract model weights. These weights were then used to train three meta-learners, logistic regression (LR), extreme gradient boosting (XGB), and MLP, to classify cases as GL or non-GL.

Results: The MLP meta-learner achieved the highest performance, with an accuracy of 96.43%, an F-score of 96.01%, and an AUC of 97.96%, while also demonstrating the lowest prediction uncertainty (0.08 ± 0.13). XGB followed with 92.86% accuracy, a 92.31% F-score, and a 96.10% AUC. LR had the lowest performance, with 89.29% accuracy, an 86.96% F-score, and a 94.81% AUC, as well as the highest uncertainty (0.58 ± 0.07). Permutation importance analysis revealed that the superior temporal sector was the most influential VF feature, with importance scores of 0.08 in the Kang model and 0.04 in the HAP2_part1 model. Among clinical variables, age was the strongest contributor (score = 0.3).

Conclusions: The meta-learner outperformed stand-alone models in GL classification, achieving an accuracy improvement of 8.92% over the best-performing stand-alone model (LoGTS, 87.51%), offering a valuable tool for automated glaucoma detection.
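The pipeline the abstract describes — base classifiers whose predicted probabilities ("stacked weights") feed a meta-learner, followed by permutation importance to rank features — can be sketched roughly as below. This is a minimal illustration on synthetic data using scikit-learn, not the authors' PyGlaucoMetrics code: the feature count (56, standing in for 52 TD/PD values plus 4 clinical variables), the base/meta model choices, and all hyperparameters are assumptions.

```python
# Hedged sketch of a stacked-weight classifier + permutation importance.
# Synthetic data stands in for the MEE visual-field dataset; all names and
# parameters here are illustrative assumptions, not the paper's actual setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 56 features: stand-in for 52 TD/PD values + 4 clinical variables.
X, y = make_classification(n_samples=2000, n_features=56,
                           n_informative=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Base learners play the role of the clinical criteria models; their
# cross-validated predicted probabilities become the meta-learner's inputs.
base = [
    ("lr", LogisticRegression(max_iter=1000)),
    ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)),
]
meta = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
stack = StackingClassifier(estimators=base, final_estimator=meta,
                           stack_method="predict_proba", cv=3)
stack.fit(X_tr, y_tr)

proba = stack.predict_proba(X_te)[:, 1]
acc = accuracy_score(y_te, proba > 0.5)
auc = roc_auc_score(y_te, proba)
print(f"accuracy={acc:.3f}  AUC={auc:.3f}")

# Permutation importance: shuffle one feature at a time and measure the score
# drop — the same idea used in the paper to rank VF sectors and clinical variables.
imp = permutation_importance(stack, X_te, y_te, n_repeats=5, random_state=0)
top = np.argsort(imp.importances_mean)[::-1][:5]
print("top features by permutation importance:", top)
```

In the paper, the base models are the five clinical criteria (LoGTS, UKGTS, Kang, HAP2_part1, Foster) rather than generic classifiers, and XGBoost is one of the three meta-learners compared; `GradientBoosting`-style or `xgboost` estimators could be dropped into the `final_estimator` slot in the same way.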


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/53bb/11944261/f78a85da7568/medicina-61-00541-g001.jpg
