Kao Po-Yu, Shailja Shailja, Jiang Jiaxiang, Zhang Angela, Khan Amil, Chen Jefferson W, Manjunath B S
Vision Research Lab, Department of Electrical and Computer Engineering, University of California, Santa Barbara, Santa Barbara, CA, United States.
Department of Neurological Surgery, University of California, Irvine, Irvine, CA, United States.
Front Neurosci. 2020 Jan 24;13:1449. doi: 10.3389/fnins.2019.01449. eCollection 2019.
Manual brain tumor annotation is time- and resource-consuming; an automated, accurate brain tumor segmentation tool is therefore in great demand. In this paper, we introduce a novel method that integrates location information with state-of-the-art patch-based neural networks for brain tumor segmentation. This is motivated by the observation that lesions are not uniformly distributed across brain parcellation regions, so a locality-sensitive segmentation is likely to achieve better accuracy. To this end, we use an existing brain parcellation atlas in Montreal Neurological Institute (MNI) space and map it to each individual subject's data. The mapped atlas in the subject space is integrated with structural Magnetic Resonance (MR) imaging data, and patch-based neural networks, including 3D U-Net and DeepMedic, are trained to classify the different brain lesions. Multiple state-of-the-art neural networks are trained and combined with XGBoost fusion in the proposed two-level ensemble method: the first level reduces the uncertainty of same-type models trained with different seed initializations, and the second level leverages the complementary strengths of different types of neural network models. The proposed location-information fusion improves the segmentation performance of state-of-the-art networks, including 3D U-Net and DeepMedic. Our ensemble also outperforms state-of-the-art networks on BraTS 2017 and rivals them on BraTS 2018. Detailed results are provided on the public multimodal Brain Tumor Segmentation (BraTS) benchmarks.
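The two-level ensemble described above can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: the voxel probabilities are random stand-in data, the model-family names are taken from the abstract, and scikit-learn's `GradientBoostingClassifier` is used as a generic stand-in for XGBoost fusion.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Toy per-voxel class probabilities from two model families (3D U-Net and
# DeepMedic), each trained with 3 different seed initializations.
# All values here are synthetic stand-ins for real network outputs.
n_voxels, n_classes, n_seeds = 200, 4, 3
unet_probs = rng.dirichlet(np.ones(n_classes), size=(n_seeds, n_voxels))
deepmedic_probs = rng.dirichlet(np.ones(n_classes), size=(n_seeds, n_voxels))
labels = rng.integers(0, n_classes, size=n_voxels)  # toy ground-truth labels

# Level 1: average over seeds within each model family, reducing the
# uncertainty introduced by different seed initializations.
unet_avg = unet_probs.mean(axis=0)            # shape (n_voxels, n_classes)
deepmedic_avg = deepmedic_probs.mean(axis=0)  # shape (n_voxels, n_classes)

# Level 2: fuse the per-family probabilities with a gradient-boosting
# classifier (a stand-in for XGBoost), letting the fusion model exploit
# the complementary strengths of the two architectures.
features = np.hstack([unet_avg, deepmedic_avg])  # (n_voxels, 2 * n_classes)
fusion = GradientBoostingClassifier(n_estimators=20, random_state=0)
fusion.fit(features, labels)
fused_pred = fusion.predict(features)            # final per-voxel labels
```

In a real pipeline the features at level 2 would also carry the mapped atlas (location) channel alongside the network probabilities, and the fusion model would be trained on held-out validation voxels rather than the training data itself.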