Hua Cam-Hao, Huynh-The Thien, Lee Sungyoung
Annu Int Conf IEEE Eng Med Biol Soc. 2019 Jul;2019:36-39. doi: 10.1109/EMBC.2019.8856552.
With the recent advent of deep learning in medical image processing, retinal blood vessel segmentation has been addressed comprehensively by numerous research works. However, because the ratio of vessel to background pixels is heavily imbalanced, many approaches rely on patches augmented from the original fundus images, combined with fully convolutional networks, to tackle this pixel-wise labeling problem, which incurs a significant computational cost. In this paper, a method using Round-wise Features Aggregation on Bracket-shaped convolutional neural networks (RFA-BNet) is proposed to eliminate the need for patch augmentation while efficiently handling the irregular and diverse appearance of retinal vessels. In particular, given raw fundus images, typical feature maps extracted by a pretrained backbone network are fed into a bracket-shaped decoder, in which middle-scale features are continuously exploited round by round. The highest-resolution decoded maps of each round are then aggregated, enabling the model to flexibly learn varying degrees of embedded semantic detail while preserving proper delineation of thin and small vessels. Finally, the proposed approach demonstrates its effectiveness on the DRIVE dataset, achieving a sensitivity of 0.7932, specificity of 0.9741, accuracy of 0.9511, and AUROC of 0.9732.
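The following is a minimal, hedged sketch (not the authors' implementation) of the two ideas highlighted in the abstract: a decoder that re-exploits middle-scale backbone features over several "rounds", and aggregation of each round's highest-resolution decoded map into the final vessel probability map. The choice of a ResNet-18 backbone, the channel sizes, the stride-2 feedback convolution, and summation as the aggregation operator are all assumptions made for illustration only.

```python
# Illustrative sketch of round-wise feature aggregation over a multi-scale decoder.
# This is NOT the published RFA-BNet; backbone, channels, and fusion details are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18


class FuseUp(nn.Module):
    """Upsample a coarser map to the skip feature's resolution and fuse by concatenation."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch + skip_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = F.interpolate(x, size=skip.shape[-2:], mode="bilinear", align_corners=False)
        return self.conv(torch.cat([x, skip], dim=1))


class RoundWiseAggregationNet(nn.Module):
    """Decoder run for several rounds; each round's finest decoded map is aggregated."""
    def __init__(self, rounds=3):
        super().__init__()
        backbone = resnet18(weights=None)  # pretrained weights would be loaded in practice
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu)
        self.pool = backbone.maxpool
        self.layer1, self.layer2, self.layer3 = backbone.layer1, backbone.layer2, backbone.layer3
        self.rounds = rounds
        # Decoder stages are shared across rounds, so middle-scale features are re-used each round.
        self.fuse_mid = FuseUp(256, 128, 128)   # 1/16 -> 1/8 scale
        self.fuse_high = FuseUp(128, 64, 64)    # 1/8  -> 1/4 scale
        self.back = nn.Conv2d(128, 256, 3, stride=2, padding=1)  # feed a coarsened summary forward
        self.head = nn.Conv2d(64, 1, 1)

    def forward(self, x):
        f2 = self.stem(x)                # 1/2 resolution, 64 channels
        f4 = self.layer1(self.pool(f2))  # 1/4, 64 ch
        f8 = self.layer2(f4)             # 1/8, 128 ch (middle scale)
        f16 = self.layer3(f8)            # 1/16, 256 ch

        coarse = f16
        round_logits = []
        for _ in range(self.rounds):
            d8 = self.fuse_mid(coarse, f8)    # middle-scale features exploited every round
            d4 = self.fuse_high(d8, f4)       # highest-resolution decoded map of this round
            round_logits.append(self.head(d4))
            coarse = self.back(d8)            # next round starts from a re-coarsened map
        # Aggregate the per-round highest-resolution maps (summation chosen as one plausible option).
        logits = torch.stack(round_logits, dim=0).sum(dim=0)
        return F.interpolate(logits, size=x.shape[-2:], mode="bilinear", align_corners=False)


if __name__ == "__main__":
    # Whole fundus images (no patch augmentation) pass through the network in one shot.
    model = RoundWiseAggregationNet(rounds=3)
    dummy = torch.randn(1, 3, 576, 576)  # DRIVE-like image size, assumed for the example
    print(model(dummy).shape)            # torch.Size([1, 1, 576, 576])
```

A sigmoid over the returned logits would give per-pixel vessel probabilities; whether the published model sums, concatenates, or otherwise weights the per-round maps is not specified in the abstract, so the summation above is only one reasonable reading of "aggregated".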