Bhati Amit, Jain Samir, Gour Neha, Khanna Pritee, Ojha Aparajita, Werghi Naoufel
PDPM Indian Institute of Information Technology, Design and Manufacturing, Jabalpur 482005, India.
Khalifa University, Abu Dhabi, United Arab Emirates.
Comput Biol Med. 2025 Mar;186:109592. doi: 10.1016/j.compbiomed.2024.109592. Epub 2024 Dec 28.
Accurate extraction of retinal vascular components is vital in diagnosing and treating retinal diseases. Achieving precise segmentation of retinal blood vessels is challenging because of their complex structure and their overlap with other anatomical features. Existing deep neural networks often produce false positives at vessel branches or miss fragile vessel patterns. Moreover, deploying existing models in resource-constrained environments is difficult because of their computational complexity. An attention-based, computationally efficient architecture is proposed in this work to bridge this gap while improving the segmentation of retinal vascular structures.
The proposed dynamic statistical attention-based lightweight model for retinal vessel segmentation (DyStA-RetNet) employs a shallow CNN-based encoder-decoder architecture. One branch of the decoder uses a partial decoder that connects encoder layers with decoder layers to transfer high-level semantic information, whereas the other branch incorporates low-level information. The multi-scale dynamic attention block enables the network to accurately identify tree-shaped vessel patterns of different sizes during the reconstruction phase in the decoder. The statistical spatial attention block improves the feature learning capability. By effectively integrating low-level and high-level semantic information, DyStA-RetNet significantly improves vessel segmentation performance.
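The abstract does not give the exact formulation of the statistical spatial attention block; a minimal NumPy sketch of one plausible variant, which derives a per-pixel attention map from channel-wise statistics (mean, max, standard deviation) and rescales the feature map with it, might look like the following. The function name and the fixed combination weights are illustrative assumptions, not the authors' design.

```python
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def statistical_spatial_attention(feat, weights=(1.0, 1.0, 1.0)):
    """Hypothetical statistics-driven spatial attention (illustrative only).

    feat: feature map of shape (C, H, W).
    Per-pixel mean, max, and std are computed across the channel axis,
    combined linearly, and squashed to (0, 1) to form an attention map
    that reweights every channel at each spatial location.
    """
    mean_map = feat.mean(axis=0)   # (H, W)
    max_map = feat.max(axis=0)     # (H, W)
    std_map = feat.std(axis=0)     # (H, W)
    w_mean, w_max, w_std = weights
    attn = sigmoid(w_mean * mean_map + w_max * max_map + w_std * std_map)
    return feat * attn[None, :, :]  # broadcast over channels


# Toy usage: an 8-channel 16x16 feature map.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16))
y = statistical_spatial_attention(x)
print(y.shape)  # (8, 16, 16)
```

Because the attention map lies strictly in (0, 1), the block can only attenuate activations, never amplify them; a learned variant would replace the fixed weights with trainable parameters.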
Experiments performed on four benchmark datasets (DRIVE, STARE, CHASEDB, and HRF) demonstrate the suitability of DyStA-RetNet for clinical applications, achieving superior segmentation performance with a significantly smaller number of trainable parameters (37.19K) and GFLOPS (0.75).
The proposed lightweight CNN-based DyStA-RetNet efficiently extracts complex retinal vascular components from fundus images. It is computationally efficient and deployable in resource-constrained environments.