National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China; University of Chinese Academy of Sciences, Beijing, China.
National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China.
Med Image Anal. 2018 Jan;43:98-111. doi: 10.1016/j.media.2017.10.002. Epub 2017 Oct 5.
Accurate and reliable brain tumor segmentation is a critical component of cancer diagnosis, treatment planning, and treatment outcome evaluation. Building upon successful deep learning techniques, a novel brain tumor segmentation method is developed by integrating fully convolutional neural networks (FCNNs) and Conditional Random Fields (CRFs) in a unified framework to obtain segmentation results with appearance and spatial consistency. We train a deep learning based segmentation model using 2D image patches and image slices in the following steps: 1) training FCNNs using image patches; 2) training CRFs formulated as Recurrent Neural Networks (CRF-RNN) using image slices, with the parameters of the FCNNs fixed; and 3) fine-tuning the FCNNs and the CRF-RNN jointly using image slices. In particular, we train three segmentation models using 2D image patches and slices obtained in the axial, coronal, and sagittal views, respectively, and combine them to segment brain tumors using a voting-based fusion strategy. Our method can segment brain images slice by slice, which is much faster than patch-based segmentation. We evaluated our method on imaging data provided by the Multimodal Brain Tumor Image Segmentation Challenge (BRATS) 2013, BRATS 2015, and BRATS 2016. The experimental results demonstrate that our method can build a segmentation model with Flair, T1c, and T2 scans and achieve performance competitive with models built with Flair, T1, T1c, and T2 scans.
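The three view-specific models are combined with a voting-based fusion strategy. As a rough illustration only (the abstract does not give implementation details), the sketch below shows per-voxel majority voting over class labels predicted by the axial, coronal, and sagittal models, assuming each model's output has already been resampled to a common label volume; the function name, label encoding, and tie-breaking rule are assumptions, not taken from the paper.

```python
import numpy as np

def fuse_by_voting(axial_labels, coronal_labels, sagittal_labels, num_classes=5):
    """Majority-vote fusion of three view-wise label volumes of shape (H, W, D).

    Each input is an integer label volume produced by one view-specific
    FCNN + CRF-RNN model, assumed to be resampled to a common grid.
    """
    votes = np.zeros(axial_labels.shape + (num_classes,), dtype=np.int32)
    for labels in (axial_labels, coronal_labels, sagittal_labels):
        for c in range(num_classes):
            # One vote per voxel for the class this model predicted.
            votes[..., c] += (labels == c)
    # Fused label: the class with the most votes at each voxel.
    return votes.argmax(axis=-1)

# Hypothetical usage with BRATS-sized volumes:
axial = np.random.randint(0, 5, size=(240, 240, 155))
coronal = np.random.randint(0, 5, size=(240, 240, 155))
sagittal = np.random.randint(0, 5, size=(240, 240, 155))
fused = fuse_by_voting(axial, coronal, sagittal)
```

Note that `argmax` breaks ties toward the lowest class index; the paper's actual tie-breaking rule is not specified in the abstract.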