Bhalodia Riddhish, Elhabian Shireen Y, Kavan Ladislav, Whitaker Ross T
Scientific Computing and Imaging Institute, University of Utah.
School of Computing, University of Utah.
Med Image Comput Comput Assist Interv. 2019 Oct;11765:391-400. doi: 10.1007/978-3-030-32245-8_44. Epub 2019 Oct 10.
Spatial transformations are enablers in a variety of medical image analysis applications that entail aligning images to a common coordinate system. Population analysis of such transformations is expected to capture the underlying image and shape variations, and hence these transformations are required to produce correspondences. This is usually enforced through some smoothness-based generic metric or regularization of the deformation field. Alternatively, population-based regularization has been shown to produce anatomically accurate correspondences in cases where anatomically unaware (i.e., data-independent) regularization fails. Recently, deep networks have been used to generate spatial transformations in an unsupervised manner, and, once trained, these networks are computationally faster than and as accurate as conventional, optimization-based registration methods. However, the deformation fields produced by these networks require smoothness penalties, just as in conventional registration methods, and ignore population-level statistics of the transformations. Here, we propose a novel neural network architecture that simultaneously learns and uses the population-level statistics of the spatial transformations to regularize the neural network for unsupervised image registration. This regularization takes the form of a bottleneck autoencoder, which learns and adapts to the population of transformations required to align input images by encoding the transformations onto a low-dimensional manifold. The proposed architecture produces deformation fields that describe population-level features and associated correspondences in an anatomically relevant manner and are statistically compact relative to state-of-the-art approaches, while maintaining computational efficiency. We demonstrate the efficacy of the proposed architecture on synthetic data sets, as well as on 2D and 3D medical data.
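To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of a registration loss regularized by a bottleneck autoencoder: the deformation field is penalized by how poorly a low-dimensional encode/decode cycle reconstructs it, so that fields far from the learned population manifold incur a higher cost. A fixed linear encoder/decoder with tied weights stands in for the trained autoencoder, and all names, shapes, and the weighting parameter `lam` are illustrative assumptions.

```python
import numpy as np

def bottleneck_regularizer(phi, W_enc, W_dec):
    """Reconstruction error of the flattened deformation field `phi`
    after projection through a low-dimensional bottleneck.
    Fields close to the (learned) low-dimensional manifold of the
    population of transformations reconstruct well and are penalized less."""
    v = phi.ravel()
    z = W_enc @ v        # encode to the low-dimensional latent space
    v_hat = W_dec @ z    # decode back to the full field dimension
    return float(np.mean((v - v_hat) ** 2))

def registration_loss(moving_warped, fixed, phi, W_enc, W_dec, lam=0.1):
    """Image-similarity term (plain MSE here, as an assumption) plus the
    population-based autoencoder regularizer, weighted by `lam`."""
    sim = float(np.mean((moving_warped - fixed) ** 2))
    return sim + lam * bottleneck_regularizer(phi, W_enc, W_dec)

# Toy example: an 8x8 2D deformation field (two displacement components)
# with a 5-dimensional latent bottleneck.
rng = np.random.default_rng(0)
phi = rng.normal(size=(2, 8, 8))
d, k = phi.size, 5
W_enc = rng.normal(size=(k, d)) / np.sqrt(d)
W_dec = W_enc.T  # tied weights, for the sketch only
fixed = rng.normal(size=(8, 8))
moving_warped = fixed + 0.01 * rng.normal(size=(8, 8))
loss = registration_loss(moving_warped, fixed, phi, W_enc, W_dec)
print(loss)
```

In the paper's architecture the encoder/decoder weights are themselves learned jointly with the registration network, so the regularizer adapts to the population of transformations rather than using a fixed projection as above.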