Bhimavarapu Usharani
Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh, India.
J Imaging Inform Med. 2025 Feb;38(1):520-533. doi: 10.1007/s10278-024-01219-2. Epub 2024 Aug 8.
Segmenting retinal blood vessels poses a significant challenge due to the irregularities inherent in small vessels. The complexity arises from the difficulty of effectively merging features at multiple levels, coupled with potential spatial information loss during successive down-sampling steps; this particularly affects the identification of small and faintly contrasting vessels. To address these challenges, we present a model tailored for automated arterial/venous (A/V) classification, complementing blood vessel segmentation. This paper presents an advanced methodology for segmenting and classifying retinal vessels using a series of pre-processing and feature-extraction techniques. An ensemble filter approach, incorporating bilateral and Laplacian edge detectors, enhances image contrast while preserving edges. The proposed algorithm further refines the image by generating an orientation map. During the vessel-extraction step, a fully convolutional network processes the input image to create a detailed vessel map, enhanced by attention operations that improve the model's perception and resilience. The encoder extracts semantic features, while the attention module refines the blood-vessel depiction, yielding highly accurate segmentation. The model was verified on the STARE dataset (400 images), the DRIVE dataset (40 images), the HRF dataset (45 images), and the INSPIRE-AVR dataset (40 images). The proposed model demonstrated superior performance across all datasets, achieving an accuracy of 97.5% on DRIVE, 99.25% on STARE, 98.33% on INSPIRE-AVR, and 98.67% on HRF. These results highlight the method's effectiveness in accurately segmenting and classifying retinal vessels.
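The abstract describes a pre-processing stage that combines an edge-preserving bilateral filter with a Laplacian edge detector. The paper's exact filter parameters and combination rule are not given, so the following is only a minimal NumPy sketch of that kind of ensemble step (the function names, window radius, sigmas, and the 0.5 sharpening weight are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Edge-preserving smoothing: each neighbour is weighted by both its
    spatial distance and its intensity difference from the centre pixel.
    Assumes a 2-D float image scaled to [0, 1]."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="reflect")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))  # fixed spatial kernel
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # range kernel: down-weight neighbours with dissimilar intensity
            rng = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * rng
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

def laplacian(img):
    """4-neighbour Laplacian edge response (reflect-padded borders)."""
    pad = np.pad(img, 1, mode="reflect")
    return (pad[:-2, 1:-1] + pad[2:, 1:-1] +
            pad[1:-1, :-2] + pad[1:-1, 2:] - 4 * img)

# Illustrative ensemble: smooth with the bilateral filter, then sharpen
# vessel edges by subtracting a fraction of the Laplacian response.
img = np.random.default_rng(0).random((32, 32))
smoothed = bilateral_filter(img)
enhanced = smoothed - 0.5 * laplacian(smoothed)
```

An orientation map such as the one the abstract mentions could then be derived from the gradients of `enhanced` (e.g. `np.arctan2` of the vertical and horizontal differences), but the paper's specific construction is not stated here.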