Mohseni Salehi Seyed Sadegh, Erdogmus Deniz, Gholipour Ali
IEEE Trans Med Imaging. 2017 Nov;36(11):2319-2330. doi: 10.1109/TMI.2017.2721362. Epub 2017 Jun 28.
Brain extraction or whole brain segmentation is an important first step in many neuroimage analysis pipelines. The accuracy and robustness of brain extraction are therefore crucial for the accuracy of the entire brain analysis process. State-of-the-art brain extraction techniques rely heavily on the accuracy of alignment or registration between brain atlases and query brain anatomy, and/or make assumptions about the image geometry, and therefore have limited success when these assumptions do not hold or image registration fails. With the aim of designing an accurate, learning-based, geometry-independent, and registration-free brain extraction tool, in this paper we present a technique based on an auto-context convolutional neural network (CNN), in which intrinsic local and global image features are learned through 2-D patches of different window sizes. We consider two different architectures: 1) a voxelwise approach based on three parallel 2-D convolutional pathways for three different directions (axial, coronal, and sagittal) that implicitly learns 3-D image information without the need for computationally expensive 3-D convolutions, and 2) a fully convolutional network based on the U-net architecture. Posterior probability maps generated by the networks are used iteratively as context information, along with the original image patches, to learn the local shape and connectedness of the brain and extract it from non-brain tissue. The brain extraction results obtained from our CNNs are superior to recently reported results in the literature on two publicly available benchmark data sets, LPBA40 and OASIS, on which we obtained Dice overlap coefficients of 97.73% and 97.62%, respectively. Significant improvement was achieved through our auto-context algorithm. Furthermore, we evaluated the performance of our algorithm on the challenging problem of extracting arbitrarily oriented fetal brains in reconstructed fetal brain magnetic resonance imaging (MRI) data sets. In this application, our voxelwise auto-context CNN performed much better than the other methods (Dice coefficient: 95.97%), which performed poorly due to the non-standard orientation and geometry of the fetal brain in MRI. Through training, our method can provide accurate brain extraction in challenging applications. This, in turn, may reduce the problems associated with image registration in segmentation tasks.
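To make the voxelwise three-pathway and auto-context ideas described in the abstract concrete, the following is a minimal PyTorch sketch. It is an illustration under stated assumptions, not the authors' implementation: the layer counts, feature widths, 25x25 patch size, and the way the previous posterior is fed back as an extra input channel are all placeholders chosen for brevity.

```python
# Minimal sketch (assumed PyTorch) of a voxelwise network with three parallel
# 2-D pathways (axial, coronal, sagittal) and an auto-context second pass in
# which the posterior from the first pass is stacked as an extra input channel.
# All sizes and hyperparameters are illustrative, not the paper's configuration.
import torch
import torch.nn as nn

class PlanePathway(nn.Module):
    """One 2-D convolutional pathway for a single plane (axial, coronal, or sagittal)."""
    def __init__(self, in_channels: int, feat: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, feat, kernel_size=5), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(feat, feat * 2, kernel_size=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # collapse to a per-patch feature vector
            nn.Flatten(),
        )

    def forward(self, x):
        return self.net(x)

class VoxelwiseAutoContextCNN(nn.Module):
    """Classifies the centre voxel of three orthogonal 2-D patches as brain / non-brain."""
    def __init__(self, in_channels: int = 1, feat: int = 32):
        super().__init__()
        self.axial = PlanePathway(in_channels, feat)
        self.coronal = PlanePathway(in_channels, feat)
        self.sagittal = PlanePathway(in_channels, feat)
        self.classifier = nn.Sequential(
            nn.Linear(3 * feat * 2, 64), nn.ReLU(),
            nn.Linear(64, 2),          # brain vs. non-brain posterior
        )

    def forward(self, ax, co, sa):
        # Concatenate the per-plane features and classify the centre voxel.
        z = torch.cat([self.axial(ax), self.coronal(co), self.sagittal(sa)], dim=1)
        return self.classifier(z)

# First pass: intensity patches only (1 channel per plane).
net0 = VoxelwiseAutoContextCNN(in_channels=1)
ax = co = sa = torch.randn(8, 1, 25, 25)          # batch of 8 hypothetical 25x25 patches
posterior0 = net0(ax, co, sa).softmax(dim=1)      # posterior for the centre voxels

# Auto-context pass: stack the previous posterior (here faked as a constant
# per-patch channel; the paper feeds back full probability-map patches) with
# the intensity patch and train a second network of the same shape.
prob_channel = posterior0[:, 1].view(-1, 1, 1, 1).expand(-1, 1, 25, 25)
ax1 = torch.cat([ax, prob_channel], dim=1)
co1 = torch.cat([co, prob_channel], dim=1)
sa1 = torch.cat([sa, prob_channel], dim=1)
net1 = VoxelwiseAutoContextCNN(in_channels=2)
posterior1 = net1(ax1, co1, sa1).softmax(dim=1)
```

The key design point the sketch captures is that 3-D context comes cheaply from three orthogonal 2-D patches rather than 3-D convolutions, and that the auto-context step simply widens the input channels of an otherwise identical network so it can refine its own posterior.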