Ding Jiaqi, Zhang Zehua, Tang Jijun, Guo Fei
School of Computer Science and Technology, College of Intelligence and Computing, Tianjin University, Tianjin, China.
School of Computer Science and Engineering, Central South University, Changsha, China.
Front Bioeng Biotechnol. 2021 Aug 19;9:697915. doi: 10.3389/fbioe.2021.697915. eCollection 2021.
Changes in the fundus blood vessels reflect the onset of eye diseases and can also reveal systemic diseases that cause fundus lesions, such as complications of diabetes and hypertension. However, existing computational methods lack efficient and precise segmentation of vascular ends and thin retinal vessels. Constructing a reliable, quantitative automatic diagnostic method is therefore important for improving diagnostic efficiency. In this study, we propose a multichannel deep neural network for retinal vessel segmentation. First, we apply U-Net to the original vessels and to the thin (or thick) vessels separately, as a multi-objective optimization that purposively trains thick and thin vessels. Then, we design a specific fusion mechanism that combines the three prediction probability maps into a final binary segmentation map. Experiments show that our method effectively improves the segmentation of thin blood vessels and vascular ends, and it outperforms many current excellent vessel segmentation methods on three public datasets. Notably, we achieve the best F1-score of 0.8247 on the DRIVE dataset and 0.8239 on the STARE dataset. The findings of this study have potential applications in automated retinal image analysis and may provide a new, general, and high-performance computing framework for image segmentation.
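As a rough illustration of the fusion step described in the abstract, the sketch below combines three per-pixel probability maps (from the original-, thick-, and thin-vessel channels) into a single binary segmentation map. The abstract does not specify the fusion mechanism, so a weighted average followed by thresholding is assumed here purely for illustration; the function name, weights, and threshold are hypothetical, not the authors' exact method.

import numpy as np

def fuse_probability_maps(p_original, p_thick, p_thin,
                          weights=(1.0, 1.0, 1.0), threshold=0.5):
    """Fuse three per-pixel vessel probability maps into one binary mask.

    Minimal sketch: a weighted average of the three channel predictions
    followed by thresholding is assumed, since the paper's specific
    fusion mechanism is not detailed in the abstract.
    """
    maps = np.stack([p_original, p_thick, p_thin], axis=0)   # shape (3, H, W)
    w = np.asarray(weights, dtype=float).reshape(3, 1, 1)
    fused = (w * maps).sum(axis=0) / w.sum()                  # weighted mean per pixel
    return (fused >= threshold).astype(np.uint8)              # final binary map

if __name__ == "__main__":
    # Random probability maps stand in for the three U-Net outputs.
    rng = np.random.default_rng(0)
    h, w = 584, 565                                           # DRIVE image size (H, W)
    mask = fuse_probability_maps(rng.random((h, w)),
                                 rng.random((h, w)),
                                 rng.random((h, w)))
    print(mask.shape, mask.dtype)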