Zhao Liya, Jia Kebin
Multimedia Information Processing Group, College of Electronic Information & Control Engineering, Beijing University of Technology, Beijing, China.
Comput Math Methods Med. 2016;2016:8356294. doi: 10.1155/2016/8356294. Epub 2016 Mar 16.
Early brain tumor detection and diagnosis are critical in clinical practice. Thus, segmentation of the focused tumor area needs to be accurate, efficient, and robust. In this paper, we propose an automatic brain tumor segmentation method based on Convolutional Neural Networks (CNNs). Traditional CNNs focus only on local features and ignore global region features, both of which are important for pixel classification and recognition. Moreover, a brain tumor can appear anywhere in the brain and vary in size and shape across patients. We design a three-stream framework named multiscale CNNs, which automatically detects the optimum top-three image scales and combines information from regions at those scales around each pixel. Datasets provided by the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS), organized by MICCAI 2013, are used for both training and testing. The designed multiscale CNNs framework also combines multimodal features from T1, T1-enhanced, T2, and FLAIR MRI images. Compared with traditional CNNs and the two best methods in BRATS 2012 and 2013, our framework shows improved brain tumor segmentation accuracy and robustness.
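The multiscale sampling idea described above can be sketched as follows. This is a minimal illustration, not the paper's exact configuration: the scale sizes, the strided downsampling, and the function name are assumptions chosen for the example. Each of the three streams receives a patch centered on the same pixel, with the larger patches downsampled to a common grid so that wider context is captured at the same resolution.

```python
import numpy as np

def multiscale_patches(volume, y, x, scales=(12, 24, 48)):
    """Extract co-centered patches at three scales around pixel (y, x).

    volume: (C, H, W) array, one channel per MRI modality
            (e.g. T1, T1-enhanced, T2, FLAIR).
    Larger patches are strided-downsampled to the smallest scale, so each
    stream sees the same spatial grid but a progressively wider context.
    Scale sizes here are illustrative, not taken from the paper.
    """
    base = scales[0]
    patches = []
    for s in scales:
        half = s // 2
        # pad with edge values so border pixels still get a full window
        padded = np.pad(volume, ((0, 0), (half, half), (half, half)), mode="edge")
        # s x s window centered on (y, x) in original coordinates
        patch = padded[:, y:y + s, x:x + s]
        stride = s // base
        # downsample to base x base by strided slicing
        patches.append(patch[:, ::stride, ::stride])
    return np.stack(patches)  # shape: (n_scales, C, base, base)

# toy example: 4 modalities, a 64x64 slice
vol = np.random.rand(4, 64, 64)
p = multiscale_patches(vol, 30, 30)
print(p.shape)  # (3, 4, 12, 12)
```

Each of the three resulting patch tensors would then feed one CNN stream, and the streams' features are combined for the final per-pixel classification.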