

Joint optic disc and cup boundary extraction from monocular fundus images.

Author Information

Chakravarty Arunava, Sivaswamy Jayanthi

Affiliation

Centre for Visual Information Technology, International Institute of Information Technology Hyderabad, 500032, India.

Publication Information

Comput Methods Programs Biomed. 2017 Aug;147:51-61. doi: 10.1016/j.cmpb.2017.06.004. Epub 2017 Jun 23.

Abstract

BACKGROUND AND OBJECTIVE

Accurate segmentation of the optic disc and cup from monocular color fundus images plays a significant role in the screening and diagnosis of glaucoma. Though the optic cup is characterized by a drop in depth from the disc boundary, most existing methods segment the two structures separately and, owing to the lack of explicit depth information in color fundus images, rely only on color and vessel-kink cues.

METHODS

We propose a novel boundary-based Conditional Random Field formulation that extracts both the optic disc and cup boundaries in a single optimization step. In addition to color gradients, the proposed method explicitly models depth, which is estimated from the fundus image itself using a coupled sparse dictionary trained on a set of paired images and depth maps (the latter derived from Optical Coherence Tomography).
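To make the coupled-dictionary idea concrete: an image patch is sparsely coded over an image dictionary, and the same code is then applied to a paired depth dictionary to synthesize a depth patch. The sketch below is only an illustration of this general technique, not the paper's implementation; `estimate_depth_patch`, the greedy OMP-style solver, and all names are assumptions.

```python
import numpy as np

def estimate_depth_patch(x_img, D_img, D_depth, n_nonzero=5):
    """Find a sparse code for an image patch over the image dictionary
    (greedy, OMP-style atom selection), then reconstruct the depth patch
    with the SAME code over the paired depth dictionary."""
    residual = x_img.astype(float).copy()
    idx = []
    coefs = np.zeros(0)
    for _ in range(n_nonzero):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D_img.T @ residual)))
        if j not in idx:
            idx.append(j)
        # re-fit coefficients of all selected atoms by least squares
        sub = D_img[:, idx]
        coefs, *_ = np.linalg.lstsq(sub, x_img, rcond=None)
        residual = x_img - sub @ coefs
    code = np.zeros(D_img.shape[1])
    code[idx] = coefs
    # the coupling: reuse the image-domain code with the depth dictionary
    return D_depth @ code
```

In practice both dictionaries would be learned jointly from OCT-derived training pairs so that a single code reconstructs both modalities; the greedy solver above is just one simple way to obtain a sparse code at test time.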

RESULTS

The estimated depth achieved a correlation coefficient of 0.80 with respect to the ground truth. The proposed segmentation method outperformed several state-of-the-art methods on five public datasets. The average Dice coefficient was in the range of 0.87-0.97 for disc segmentation across three datasets and 0.83 for cup segmentation on the DRISHTI-GS1 test set. The method also achieved good glaucoma classification performance, with an average AUC of 0.85 for five-fold cross-validation on RIM-ONE v2.
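The Dice coefficient reported above measures the overlap between a predicted and a ground-truth binary mask, 2|A∩B| / (|A| + |B|). A minimal sketch (function and variable names are hypothetical, not from the paper):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two binary segmentation masks:
    2 * |intersection| / (|A| + |B|); 1.0 when both masks are empty."""
    a = np.asarray(mask_a).astype(bool)
    b = np.asarray(mask_b).astype(bool)
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * inter / total if total else 1.0
```

A Dice score of 1.0 indicates perfect agreement and 0.0 no overlap, so the disc scores of 0.87-0.97 and cup score of 0.83 indicate close agreement with expert annotations.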

CONCLUSIONS

We propose a method to jointly segment the optic disc and cup boundaries by modeling the drop in depth between the two structures. Since our method requires only a single fundus image per eye at test time, it can be employed in large-scale glaucoma screening where expensive 3D imaging is unavailable.

