

Improving dense conditional random field for retinal vessel segmentation by discriminative feature learning and thin-vessel enhancement.

Author Information

Zhou Lei, Yu Qi, Xu Xun, Gu Yun, Yang Jie

Affiliations

Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, SEIEE Building 2-427, No. 800, Dongchuan Road, Minhang District, Shanghai, 200240 China.

Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai, China.

Publication Information

Comput Methods Programs Biomed. 2017 Sep;148:13-25. doi: 10.1016/j.cmpb.2017.06.016. Epub 2017 Jun 24.

Abstract

BACKGROUND AND OBJECTIVES

As retinal vessels in color fundus images are thin, elongated structures, standard pairwise-based random fields, which suffer from the "shrinking bias" problem, are not well suited to this segmentation task. Recently, a dense conditional random field (CRF) model has been successfully applied to retinal vessel segmentation. Its energy function is formulated as a linear combination of several unary features plus a pairwise term. However, hand-crafted unary features can be suboptimal for such linear models. Here we propose to learn discriminative unary features and to enhance thin vessels in the pairwise potentials to further improve segmentation performance.
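The energy described above can be sketched in the standard fully connected CRF notation (the symbols and kernel form below follow the common dense-CRF formulation and are assumptions, not the paper's exact notation):

$$E(\mathbf{x}) = \sum_i \psi_u(x_i) + \sum_{i<j} \psi_p(x_i, x_j), \qquad \psi_u(x_i) = \sum_k w_k\, f_k(i, x_i),$$

where the unary potential is a linear combination of features $f_k$ with learned weights $w_k$, and the pairwise potential is typically a label-compatibility function $\mu$ times Gaussian kernels over pixel positions $p$ and intensities $I$:

$$\psi_p(x_i, x_j) = \mu(x_i, x_j)\left[ w^{(1)} \exp\!\left(-\frac{\lVert p_i - p_j\rVert^2}{2\theta_\alpha^2} - \frac{\lVert I_i - I_j\rVert^2}{2\theta_\beta^2}\right) + w^{(2)} \exp\!\left(-\frac{\lVert p_i - p_j\rVert^2}{2\theta_\gamma^2}\right)\right].$$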

METHODS

Our proposed method comprises four main steps: first, image preprocessing is applied to eliminate the strong edges around the field of view (FOV) and to normalize the luminosity and contrast inside the FOV; second, a convolutional neural network (CNN) is trained to generate discriminative features for the linear model; third, a combination of filters is applied to enhance thin vessels, reducing the intensity difference between thin and wide vessels; fourth, taking the discriminative features as unary potentials and the thin-vessel-enhanced image for the pairwise potentials, we apply the dense CRF model to obtain the final retinal vessel segmentation. Segmentation performance is evaluated on four public datasets (DRIVE, STARE, CHASEDB1 and HRF).
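The FOV-restricted normalization in the first step can be sketched as follows. This is a minimal pure-Python sketch on nested lists; the function name and the choice to zero-fill pixels outside the FOV are assumptions, not the paper's implementation:

```python
def normalize_in_fov(image, fov_mask):
    """Zero-mean, unit-variance normalization using only pixels inside the FOV.

    image: 2-D list of grayscale intensities.
    fov_mask: same-shape 2-D list of booleans, True inside the field of view.
    Pixels outside the FOV are set to 0 so the strong circular FOV edge
    cannot dominate later contrast-sensitive steps.
    """
    rows, cols = len(image), len(image[0])
    vals = [image[r][c] for r in range(rows) for c in range(cols)
            if fov_mask[r][c]]
    mu = sum(vals) / len(vals)
    var = sum((v - mu) ** 2 for v in vals) / len(vals)
    sd = var ** 0.5 or 1.0  # guard against a constant image
    return [[(image[r][c] - mu) / sd if fov_mask[r][c] else 0.0
             for c in range(cols)] for r in range(rows)]
```

In practice this would run on the green channel of the fundus image, which typically shows the best vessel contrast.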

RESULTS

Experimental results show that our proposed method improves the performance of the dense CRF model and outperforms other methods when evaluated in terms of F1-score, Matthews correlation coefficient (MCC) and G-mean, three effective metrics for evaluating imbalanced binary classification. Specifically, the F1-score, MCC and G-mean are 0.7942, 0.7656 and 0.8835 on DRIVE; 0.8017, 0.7830 and 0.8859 on STARE; 0.7644, 0.7398 and 0.8579 on CHASEDB1; and 0.7627, 0.7402 and 0.8812 on HRF.
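All three metrics can be computed directly from confusion-matrix counts, treating vessel pixels as the positive class; a minimal sketch (the function name is illustrative, not from the paper):

```python
from math import sqrt

def imbalance_metrics(tp, fp, fn, tn):
    """F1-score, Matthews correlation coefficient (MCC) and G-mean from
    confusion-matrix counts (true/false positives and negatives)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    mcc = (tp * tn - fp * fn) / sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    g_mean = sqrt(recall * specificity)
    return f1, mcc, g_mean
```

Unlike plain accuracy, these metrics stay informative when vessel pixels are a small minority of the image, which is why they are used here.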

CONCLUSIONS

The discriminative features learned in CNNs are more effective than hand-crafted ones. Our proposed method performs well in retinal vessel segmentation. The architecture of our method is trainable and can be integrated into computer-aided diagnostic (CAD) systems in the future.

