
Deep Learning-based Quantification of Anterior Segment OCT Parameters.

Author Information

Soh Zhi Da, Tan Mingrui, Nongpiur Monisha Esther, Yu Marco, Qian Chaoxu, Tham Yih Chung, Koh Victor, Aung Tin, Xu Xinxing, Liu Yong, Cheng Ching-Yu

Affiliations

Singapore Eye Research Institute, Singapore National Eye Centre, Singapore.

Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore.

Publication Information

Ophthalmol Sci. 2023 Jul 3;4(1):100360. doi: 10.1016/j.xops.2023.100360. eCollection 2024 Jan-Feb.

Abstract

OBJECTIVE

To develop and validate a deep learning algorithm that could automate the annotation of scleral spur (SS) and segmentation of anterior chamber (AC) structures for measurements of AC, iris, and angle width parameters in anterior segment OCT (ASOCT) scans.

DESIGN

Cross-sectional study.

SUBJECTS

Data from 2 population-based studies (i.e., the Singapore Chinese Eye Study and Singapore Malay Eye Study) and 1 clinical study on angle-closure disease were included in algorithm development. A separate clinical study on angle-closure disease was used for external validation.

METHOD

Image contrast of ASOCT scans was first enhanced with CycleGAN. We used a heat map regression approach with a coarse-to-fine framework for SS annotation. An ensemble network of U-Net, full resolution residual network, and full resolution U-Net was then used for structure segmentation. Measurements derived from the predicted SSs and structure segmentations were compared with measurements derived from manual SS annotation and structure segmentation (i.e., ground truth).
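The coarse-to-fine heat map regression step can be illustrated with a small NumPy sketch: a network predicts a heat map peaked at the landmark, the peak gives a coarse location, and a second (fine-stage) prediction on a crop around that peak refines it. The function names, the `refine_fn` placeholder standing in for the fine-stage network, and all parameter values here are illustrative assumptions, not the study's actual implementation.

```python
import numpy as np

def gaussian_heatmap(shape, center, sigma=3.0):
    """Render a 2D Gaussian heat map peaked at `center` (row, col)."""
    rows, cols = np.indices(shape)
    d2 = (rows - center[0]) ** 2 + (cols - center[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def landmark_from_heatmap(heatmap):
    """Decode a landmark as the (row, col) of the heat map's peak."""
    return np.unravel_index(np.argmax(heatmap), heatmap.shape)

def coarse_to_fine(image_shape, coarse_hm, refine_fn, window=16):
    """Find a coarse peak, then refine it on a crop around that peak.

    `refine_fn(r0, c0, window)` stands in for the fine-stage network: it
    returns a heat map over the `window`-sized crop whose top-left corner
    is at (r0, c0) in full-image coordinates.
    """
    r, c = landmark_from_heatmap(coarse_hm)
    # Clamp the crop so it stays inside the image.
    r0 = min(max(r - window // 2, 0), image_shape[0] - window)
    c0 = min(max(c - window // 2, 0), image_shape[1] - window)
    fine_hm = refine_fn(r0, c0, window)
    fr, fc = landmark_from_heatmap(fine_hm)
    return r0 + fr, c0 + fc  # map back to full-image coordinates
```

In a real pipeline both heat maps would come from the trained CNNs; here a rendered Gaussian plays that role so the decoding logic can be checked end to end.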

MAIN OUTCOME MEASURES

Euclidean distance and intraclass correlation coefficients (ICC) were used to evaluate SS annotation, and the Dice similarity coefficient to evaluate structure segmentation. The ICC, Bland-Altman plots, and repeatability coefficients were used to evaluate agreement and precision of measurements.
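Two of the evaluation metrics named above have compact standard definitions. This sketch shows them for binary masks and landmark pairs; the function names and the pixel-to-micron conversion parameter are illustrative, not taken from the study.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity: 2|A∩B| / (|A| + |B|) for two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

def euclidean_distance_um(p, q, um_per_pixel=1.0):
    """Euclidean distance between a predicted and a manual landmark.

    `um_per_pixel` converts pixel distance to microns and is an assumed
    scan-calibration parameter, not a value reported in the paper.
    """
    return float(np.hypot(p[0] - q[0], p[1] - q[1]) * um_per_pixel)
```

For example, two 8-pixel masks overlapping in 4 pixels give Dice = 2·4/16 = 0.5, and landmarks at (0, 0) and (3, 4) are 5 pixels apart.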

RESULTS

For SS annotation, our algorithm achieved a Euclidean distance of 124.7 μm, ICC ≥ 0.95, and a 3.3% error rate. For structure segmentation, we obtained Dice similarity coefficients ≥ 0.91 for cornea, iris, and AC segmentation. For angle width measurements, ≥ 95% of data points fell within the 95% limits of agreement in Bland-Altman plots, with insignificant systematic bias (all > 0.12). The ICC ranged from 0.71 to 0.87 for angle width measurements, was 0.54 for IT750, 0.83 to 0.85 for other iris measurements, and 0.89 to 0.99 for AC measurements. Using the same SS coordinates from a human expert, measurements obtained from our algorithm were generally less variable than measurements obtained from a semiautomated angle assessment program.
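The Bland-Altman criterion used above (the share of paired measurements whose difference falls inside the 95% limits of agreement) can be computed as follows. The function names and the synthetic data in the usage example are illustrative, not the study's data.

```python
import numpy as np

def bland_altman_limits(a, b):
    """Bias (mean difference) and 95% limits of agreement for two methods."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)          # sample standard deviation of differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

def within_loa_fraction(a, b):
    """Fraction of paired differences lying inside the limits of agreement."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    _, lo, hi = bland_altman_limits(a, b)
    return float(np.mean((diff >= lo) & (diff <= hi)))
```

With small, zero-centered differences between the two methods, the bias is near zero and all points fall within the limits of agreement, which is the pattern the abstract reports for the angle width measurements.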

CONCLUSION

We developed a deep learning algorithm that automates SS annotation and structure segmentation in ASOCT scans with performance comparable to human experts, in both open-angle and angle-closure eyes. This algorithm reduces the time required and the subjectivity involved in obtaining ASOCT measurements.

FINANCIAL DISCLOSURES

The author(s) have no proprietary or commercial interest in any materials discussed in this article.

