Li Zhan, Zhang Chunxia, Zhang Yongqin, Wang Xiaofeng, Ma Xiaolong, Zhang Hai, Wu Songdi
School of Information Science and Technology, Northwest University, 710127, Xi'an, China.
Med Image Anal. 2023 Apr;85:102710. doi: 10.1016/j.media.2022.102710. Epub 2022 Dec 21.
Brain tissue segmentation is of great value in diagnosing brain disorders. Three-dimensional (3D) and two-dimensional (2D) segmentation methods for brain Magnetic Resonance Imaging (MRI) suffer from high time complexity and low segmentation accuracy, respectively. To address these two issues, we propose a Context-assisted full Attention Network (CAN) for brain MRI segmentation that integrates the 2D and 3D MRI data. Unlike the fully symmetric U-Net, the CAN takes the current 2D slice, its 3D contextual skull slices, and its 3D contextual brain slices as input, which are encoded by the DenseNet and decoded by our constructed full attention network. We have validated the effectiveness of the CAN on our collected dataset PWML and on two public datasets, dHCP2017 and MALC2012. Our code is available at https://github.com/nwuAI/CAN.
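The abstract describes the CAN input as the current 2D slice combined with 3D contextual skull and brain slices. The sketch below illustrates one plausible way to assemble such an input by channel-stacking neighboring slices; the function name, the context width, and the stacking order are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def build_can_input(brain_volume, skull_volume, idx, context=2):
    """Hypothetical sketch: stack the current 2D slice with its 3D
    contextual brain and skull slices as channels, the kind of
    2D/3D-integrated input the abstract describes feeding to the
    DenseNet encoder. Parameters are illustrative assumptions."""
    depth = brain_volume.shape[0]
    # Indices of contextual slices around the current one, clipped at
    # the volume boundaries so edge slices still get a full context.
    ctx = np.clip(np.arange(idx - context, idx + context + 1), 0, depth - 1)
    brain_ctx = brain_volume[ctx]     # 3D contextual brain slices
    skull_ctx = skull_volume[ctx]     # 3D contextual skull slices
    current = brain_volume[idx][None] # the current 2D slice
    # Channel-stack 2D and 3D information into one encoder input.
    return np.concatenate([current, brain_ctx, skull_ctx], axis=0)

brain = np.random.rand(10, 64, 64)
skull = np.random.rand(10, 64, 64)
x = build_can_input(brain, skull, idx=5, context=2)
print(x.shape)  # (11, 64, 64): 1 current + 5 brain + 5 skull slices
```

With a context width of 2, each input sample carries 11 channels; the encoder then sees local 3D context at the cost of a 2D network's time complexity, which is the trade-off the paper targets.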