Highly Accurate and Precise Automated Cup-to-Disc Ratio Quantification for Glaucoma Screening.

Author Information

Chaurasia Abadh K, Greatbatch Connor J, Han Xikun, Gharahkhani Puya, Mackey David A, MacGregor Stuart, Craig Jamie E, Hewitt Alex W

Affiliations

Menzies Institute for Medical Research, University of Tasmania, Hobart, Australia.

QIMR Berghofer Medical Research Institute, Brisbane, Australia.

Publication Information

Ophthalmol Sci. 2024 Apr 27;4(5):100540. doi: 10.1016/j.xops.2024.100540. eCollection 2024 Sep-Oct.

Abstract

OBJECTIVE

An enlarged cup-to-disc ratio (CDR) is a hallmark of glaucomatous optic neuropathy. Manual assessment of the CDR may be less accurate and more time-consuming than automated methods. Here, we sought to develop and validate a deep learning-based algorithm to automatically determine the CDR from fundus images.

DESIGN

Algorithm development for estimating CDR using fundus data from a population-based observational study.

PARTICIPANTS

A total of 181 768 fundus images from the United Kingdom Biobank (UKBB), Drishti_GS, and EyePACS.

METHODS

FastAI and PyTorch libraries were used to train a convolutional neural network-based model on fundus images from the UKBB. Models were constructed to determine image gradability (classification analysis) as well as to estimate CDR (regression analysis). The best-performing model was then validated for use in glaucoma screening using a multiethnic dataset from EyePACS and Drishti_GS.
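
For readers wanting a concrete picture of this setup, the sketch below shows how a vgg19_bn regressor for CDR could be assembled with fastai on top of PyTorch. It is a minimal illustration under assumed conventions, not the authors' released code: the label file ukbb_cdr_labels.csv, its image_path and cdr columns, the 224-pixel input size, the batch size, and the training schedule are all hypothetical.

```python
# Minimal sketch of a fastai/PyTorch CDR regression pipeline (illustrative only).
from fastai.vision.all import *
from torchvision.models import vgg19_bn
import pandas as pd

# Hypothetical label file: one row per gradable fundus image with its graded CDR.
df = pd.read_csv("ukbb_cdr_labels.csv")             # columns: image_path, cdr

dls = DataBlock(
    blocks=(ImageBlock, RegressionBlock(n_out=1)),  # image -> continuous CDR
    get_x=ColReader("image_path"),
    get_y=ColReader("cdr"),
    splitter=RandomSplitter(valid_pct=0.1, seed=42),
    item_tfms=Resize(224),                          # assumed input resolution
    batch_tfms=aug_transforms(),                    # standard augmentations
).dataloaders(df, bs=32)

learn = vision_learner(
    dls,
    vgg19_bn,                    # architecture named in the abstract
    loss_func=MSELossFlat(),     # regression objective
    metrics=[rmse, mae],
)
learn.fine_tune(10)              # epoch count is illustrative
```

An analogous gradability classifier would swap RegressionBlock for CategoryBlock and train with cross-entropy, reporting accuracy and area under the receiver operating characteristic curve instead of regression metrics.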

MAIN OUTCOME MEASURES

The area under the receiver operating characteristic curve and coefficient of determination.
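
For reference, the regression metrics reported in the results follow the standard definitions below, where y_i is the graded CDR for image i, ŷ_i the model estimate, and ȳ the mean graded CDR over the n validation images:

```latex
R^{2} = 1 - \frac{\sum_{i=1}^{n}\bigl(y_i - \hat{y}_i\bigr)^{2}}{\sum_{i=1}^{n}\bigl(y_i - \bar{y}\bigr)^{2}},
\qquad
\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\bigl(y_i - \hat{y}_i\bigr)^{2},
\qquad
\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\bigl\lvert y_i - \hat{y}_i\bigr\rvert
```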

RESULTS

Our gradability model, based on vgg19 with batch normalization (vgg19_bn), achieved an accuracy of 97.13% on a validation set of 16 045 images, with 99.26% precision and an area under the receiver operating characteristic curve of 96.56%. Using regression analysis, our best-performing model (also trained on the vgg19_bn architecture) attained a coefficient of determination of 0.8514 (95% confidence interval [CI]: 0.8459-0.8568), a mean squared error of 0.0050 (95% CI: 0.0048-0.0051), and a mean absolute error of 0.0551 (95% CI: 0.0543-0.0559) on a validation set of 12 183 images for determining CDR. The regression output was converted into classification metrics using a tolerance of 0.2 across 20 classes; the resulting classification accuracy was 99.20%. The EyePACS dataset (98 172 healthy, 3270 glaucoma) was then used to externally validate the model for glaucoma classification, yielding an accuracy, sensitivity, and specificity of 82.49%, 72.02%, and 82.83%, respectively.
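
The sketch below illustrates the kind of evaluation summarised above: model-estimated CDRs are checked against reference grades with a fixed tolerance, and thresholded into glaucoma/healthy calls to obtain accuracy, sensitivity, and specificity. The 0.6 CDR cut-off, the function names, and the array layout are assumptions for illustration only; they are not taken from the paper.

```python
# Hedged sketch of a tolerance check and a CDR-threshold screening evaluation.
import numpy as np

def tolerance_accuracy(pred_cdr, true_cdr, tol=0.2):
    """Fraction of estimates within +/- tol of the graded CDR
    (one plausible reading of the tolerance-based conversion above)."""
    pred, true = np.asarray(pred_cdr), np.asarray(true_cdr)
    return float(np.mean(np.abs(pred - true) <= tol))

def screening_metrics(pred_cdr, is_glaucoma, threshold=0.6):
    """Accuracy, sensitivity, and specificity when eyes with an estimated
    CDR above `threshold` are flagged as glaucoma suspects (threshold is
    an assumed, illustrative value)."""
    flagged = np.asarray(pred_cdr) > threshold
    labels = np.asarray(is_glaucoma, dtype=bool)

    tp = np.sum(flagged & labels)       # glaucoma correctly flagged
    tn = np.sum(~flagged & ~labels)     # healthy correctly passed
    fp = np.sum(flagged & ~labels)
    fn = np.sum(~flagged & labels)

    accuracy = (tp + tn) / labels.size
    sensitivity = tp / (tp + fn)        # true-positive rate
    specificity = tn / (tn + fp)        # true-negative rate
    return accuracy, sensitivity, specificity
```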

CONCLUSIONS

Our models were precise in determining image gradability and estimating CDR. Although our artificial intelligence-derived CDR estimates achieve high accuracy, the CDR threshold for glaucoma screening will vary depending on other clinical parameters.

FINANCIAL DISCLOSURES

Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c29a/11268341/980ce4bcc06c/gr1.jpg
