School of Information Science and Technology, Southwest Jiaotong University, 611756, Chengdu, People's Republic of China.
Phys Med Biol. 2023 Sep 27;68(19). doi: 10.1088/1361-6560/acefa0.
Automatic segmentation of fundus vessels has the potential to enhance the judgment ability of intelligent disease diagnosis systems. Although various methods have been proposed, accurately segmenting the fundus vessels remains a demanding task. The purpose of our study is to develop a robust and effective method to segment the vessels in human color retinal fundus images. We present a novel multi-level spatial-temporal and attentional information deep fusion network for the segmentation of retinal vessels, called MSAFNet, which enhances segmentation performance and robustness. Our method utilizes a multi-level spatial-temporal encoding module to obtain spatial-temporal information and a Self-Attention module to capture feature correlations at different levels of our network. Based on an encoder-decoder structure, we combine these features to obtain the final segmentation results. Through extensive experiments on four public datasets, our method achieves favorable performance compared with other state-of-the-art (SOTA) retinal vessel segmentation methods. Our Accuracy and Area Under Curve achieve the highest scores of 96.96%, 96.57%, and 96.48% and 98.78%, 98.54%, and 98.27% on the DRIVE, CHASE_DB1, and HRF datasets, respectively. Our Specificity achieves the highest scores of 98.58% and 99.08% on the DRIVE and STARE datasets. The experimental results demonstrate that our method has strong learning and representation capabilities and can accurately detect retinal blood vessels, thereby serving as a potential tool for assisting in diagnosis.
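The abstract does not specify MSAFNet's internals, but the Self-Attention module it describes captures pairwise feature correlations across spatial positions. The following minimal NumPy sketch illustrates the generic scaled dot-product self-attention mechanism only; the function name, shapes, and random projection matrices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a flattened feature map.

    x: (n, d) array -- n spatial positions, d channels.
    w_q, w_k, w_v: (d, d) projection matrices (learned in a real network;
    random here for illustration).
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(x.shape[1])       # pairwise position-to-position correlations
    scores -= scores.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # softmax over positions
    return attn @ v                              # correlation-weighted feature mixing

rng = np.random.default_rng(0)
n, d = 16, 8                                     # e.g. a 4x4 feature map with 8 channels
x = rng.standard_normal((n, d))
w_q, w_k, w_v = (rng.standard_normal((d, d)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (16, 8): one attended feature vector per position
```

Because every output position is a weighted mixture of all input positions, such a module can relate thin distal vessel segments to the larger vessel tree regardless of spatial distance, which is one plausible reason attention helps in vessel segmentation.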