Li Yuxin, Ren Tong, Li Junhuai, Li Xiangning, Li Anan
Shaanxi Key Laboratory of Network Computing and Security Technology, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, 710048, China.
Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, China.
Biomed Opt Express. 2022 Jun 1;13(6):3657-3671. doi: 10.1364/BOE.458111.
The popularity of fluorescent labelling and mesoscopic optical imaging techniques enables the acquisition of whole mammalian brain vasculature images at capillary resolution. Segmentation of the cerebrovascular network is essential for analyzing cerebrovascular structure and revealing the pathogenesis of brain diseases. Existing deep learning methods train the neural network with a single type of annotated label, weighting all pixels equally, to segment vessels. Because the shape, density and brightness of vessels vary across whole-brain fluorescence images, it is difficult for a neural network trained with a single type of label to segment all vessels accurately. To address this problem, we proposed a deep learning cerebral vasculature segmentation framework based on multi-perspective labels. First, the pixels in the central region of thick vessels and in the skeleton region of vessels were extracted separately from the binary annotated labels using morphological operations, generating two different labels. Then, we designed a three-stage 3D convolutional neural network containing three sub-networks: a thick-vessel enhancement network, a vessel skeleton enhancement network and a multi-channel fusion segmentation network. The first two sub-networks were trained with these two labels, respectively, and pre-segmented the vessels; the third sub-network fused the pre-segmented results to segment the vessels precisely. We validated our method on two mouse cerebral vascular datasets acquired with different fluorescence imaging modalities. The results showed that our method outperforms state-of-the-art methods and can be applied to segment the vasculature in large-scale volumes.
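As a rough illustration of the label-generation step, the sketch below derives the two auxiliary labels from one binary annotation using scikit-image morphology. It assumes a recent scikit-image release (where skeletonize accepts 3D input); the function name, erosion radius and structuring element are illustrative assumptions, not the authors' exact settings.

import numpy as np
from skimage.morphology import ball, binary_erosion, skeletonize

def make_multi_perspective_labels(binary_label: np.ndarray, erosion_radius: int = 2):
    """Derive two auxiliary labels from a single binary vessel annotation.

    binary_label: 3D boolean array, True where a voxel is annotated as vessel.
    Returns (thick_vessel_center, vessel_skeleton), both boolean 3D arrays.
    """
    # Erosion strips a shell of voxels from every structure, so thin vessels
    # disappear and only the central region of thick vessels survives.
    thick_vessel_center = binary_erosion(binary_label, ball(erosion_radius))

    # Skeletonization thins every vessel, thick or thin, to a one-voxel-wide
    # centerline, emphasizing topology and faint capillaries.
    vessel_skeleton = skeletonize(binary_label).astype(bool)

    return thick_vessel_center, vessel_skeleton

Under this sketch, the two derived volumes would serve as training targets for the thick-vessel enhancement and vessel skeleton enhancement sub-networks, whose pre-segmented outputs the multi-channel fusion sub-network then combines to produce the final segmentation.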