From the Paul G. Allen School of Computer Science and Engineering, Seattle, Washington, USA (PM, S-IL, MB).
Department of Ophthalmology, Seattle, Washington, USA (CAP, JCW, MRB, PPC, KDB, AYL).
Am J Ophthalmol. 2021 Nov;231:154-169. doi: 10.1016/j.ajo.2021.04.021. Epub 2021 May 2.
PURPOSE: To develop a multimodal model to automate glaucoma detection.
DESIGN: Development of a machine-learning glaucoma detection model.
METHODS: We selected a study cohort from the UK Biobank data set comprising 1193 eyes of 863 healthy subjects and 1283 eyes of 771 subjects with glaucoma. We trained a multimodal model that combines multiple deep neural networks, trained on macular optical coherence tomography volumes and color fundus photographs, with demographic and clinical data. We performed an interpretability analysis to identify the features the model relied on to detect glaucoma, using interpretable machine learning methods to determine the importance of each feature. We also evaluated the model on subjects who did not have a diagnosis of glaucoma on the day of imaging but were diagnosed later (progress-to-glaucoma [PTG]).
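To make the fusion of the three data streams concrete, the following is a minimal, hypothetical sketch (in PyTorch) of a late-fusion multimodal classifier: a small 3D CNN encodes the macular OCT volume, a 2D CNN encodes the fundus photograph, an MLP encodes the clinical and demographic variables, and the concatenated embeddings feed a binary glaucoma head. The layer sizes, encoders, and fusion strategy are illustrative assumptions, not the authors' architecture.

import torch
import torch.nn as nn

class MultimodalGlaucomaNet(nn.Module):
    def __init__(self, n_clinical: int = 20):
        super().__init__()
        # OCT branch: input shape (B, 1, D, H, W)
        self.oct_encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),        # -> (B, 32)
        )
        # Fundus branch: input shape (B, 3, H, W)
        self.fundus_encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),        # -> (B, 32)
        )
        # Clinical/demographic branch: input shape (B, n_clinical)
        self.clinical_encoder = nn.Sequential(
            nn.Linear(n_clinical, 32), nn.ReLU(),
        )
        # Fusion head: concatenate the three embeddings and classify
        self.head = nn.Sequential(
            nn.Linear(32 * 3, 64), nn.ReLU(),
            nn.Linear(64, 1),                             # glaucoma logit
        )

    def forward(self, oct_vol, fundus_img, clinical):
        z = torch.cat([
            self.oct_encoder(oct_vol),
            self.fundus_encoder(fundus_img),
            self.clinical_encoder(clinical),
        ], dim=1)
        return self.head(z)

# Example forward pass on dummy inputs (shapes are illustrative)
model = MultimodalGlaucomaNet(n_clinical=20)
logit = model(torch.randn(2, 1, 64, 128, 128),   # OCT volume
              torch.randn(2, 3, 224, 224),        # fundus photograph
              torch.randn(2, 20))                 # clinical features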
RESULTS: A multimodal model that combines imaging with demographic and clinical features is highly accurate (area under the curve 0.97). Interpretation of this model highlights biological features known to be related to the disease, such as age, intraocular pressure, and optic disc morphology. The model also points to previously unknown or disputed features, such as pulmonary function and the retinal outer layers. Accurate prediction in the PTG group highlights variables that change with progression to glaucoma: age and pulmonary function.
CONCLUSIONS: The accuracy of our model suggests that each imaging modality and the different clinical and demographic variables contribute distinct sources of information. Interpretable machine learning methods elucidate subject-level predictions and help uncover the factors that drive accurate prediction, pointing to potential disease mechanisms or variables related to the disease.
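As an illustration of how per-feature importance can be probed on the tabular (clinical and demographic) inputs, the sketch below uses permutation importance scored by AUC: each feature is shuffled in turn and the resulting drop in AUC is measured. The classifier, feature names, and synthetic data are assumptions for demonstration only and do not reproduce the paper's interpretability analysis.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["age", "intraocular_pressure", "fev1", "refractive_error"]
X = rng.normal(size=(1000, len(feature_names)))          # stand-in features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

# Shuffle each feature in turn and measure the mean drop in AUC
imp = permutation_importance(clf, X_te, y_te, scoring="roc_auc",
                             n_repeats=20, random_state=0)
for name, mean in sorted(zip(feature_names, imp.importances_mean),
                         key=lambda t: -t[1]):
    print(f"{name}: {mean:.3f}")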