Guan Xin, Zhao Yushan, Nyatega Charles Okanda, Li Qiang
School of Microelectronics, Tianjin University, Tianjin 300072, China.
School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China.
Brain Sci. 2023 Apr 11;13(4):650. doi: 10.3390/brainsci13040650.
Accurate segmentation of brain tumors from three-dimensional magnetic resonance images (MRI) is critical for clinical decisions and surgical planning. Radiologists usually separate and analyze brain tumors by combining images from the axial, coronal, and sagittal views. However, traditional convolutional neural network (CNN) models tend to use information from only a single view, or to process the views one at a time. Moreover, existing models adopt a multi-branch structure with parallel convolution kernels of different sizes to adapt to various tumor sizes. However, the differences in the convolution kernels' parameters cannot precisely characterize the feature similarity of tumor lesion regions that vary in size, connectivity, and convexity. To address these problems, we propose a hierarchical multi-view convolution method that decouples the standard 3D convolution into axial, coronal, and sagittal views to provide complementary view features. Every pixel is then classified by ensembling the discriminant results from the three views. Moreover, we propose a multi-branch kernel-sharing mechanism with dilated rates to obtain parameter-consistent convolution kernels with different receptive fields. We use the BraTS2018 and BraTS2020 datasets for comparison experiments. The average Dice coefficients of the proposed network on the BraTS2020 dataset reach 78.16%, 89.52%, and 83.05% for the enhancing tumor (ET), whole tumor (WT), and tumor core (TC), respectively, while the number of parameters is only 0.5 M. Compared with the baseline network for brain tumor segmentation, accuracy is improved by 1.74%, 0.5%, and 2.19%, respectively.
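The Dice coefficient reported above measures the overlap between a predicted segmentation mask and the ground-truth mask. As a minimal sketch (not the evaluation code used in the paper), the metric for a pair of binary masks can be computed as follows; the function name and the flattened-list representation are illustrative assumptions:

```python
def dice_coefficient(pred, target):
    """Dice similarity coefficient, 2|A ∩ B| / (|A| + |B|), for two
    binary masks given as flattened sequences of 0/1 labels."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    if total == 0:
        return 1.0  # both masks empty: conventionally treated as perfect overlap
    return 2.0 * intersection / total

# Toy example: 2 overlapping voxels, 3 + 2 predicted/true voxels in total.
score = dice_coefficient([1, 1, 0, 1], [1, 0, 0, 1])  # 2*2 / (3+2) = 0.8
```

In practice the score is computed per tumor sub-region (ET, WT, TC) and averaged over the test cases, which is how the percentages quoted in the abstract are obtained.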
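The kernel-sharing mechanism rests on a standard property of dilated convolution: a kernel of size k with dilation rate d covers a receptive field of d·(k − 1) + 1 positions, so reusing one set of weights across branches with different dilation rates yields parameter-consistent kernels with different receptive fields. The 1D sketch below illustrates this idea only; the paper's network applies it to 3D convolutions, and the function name is a hypothetical placeholder:

```python
def dilated_conv1d(signal, kernel, dilation):
    """'Valid' 1D cross-correlation with a dilated kernel.
    The same kernel weights serve every dilation rate, so parallel
    branches share parameters while seeing different receptive fields."""
    k = len(kernel)
    receptive_field = dilation * (k - 1) + 1  # d*(k-1)+1 positions covered
    return [
        sum(kernel[j] * signal[i + j * dilation] for j in range(k))
        for i in range(len(signal) - receptive_field + 1)
    ]

shared_kernel = [1, 1, 1]  # one parameter set shared across branches
narrow = dilated_conv1d([1, 2, 3, 4, 5], shared_kernel, dilation=1)  # RF = 3
wide = dilated_conv1d([1, 2, 3, 4, 5], shared_kernel, dilation=2)    # RF = 5
```

Here `narrow` sums each run of three adjacent samples, while `wide` sums every other sample over a span of five, all with identical weights.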