Zhengyang Shen, François-Xavier Vialard, Marc Niethammer
UNC Chapel Hill.
LIGM, UPEM.
Adv Neural Inf Process Syst. 2019 Dec;32:1098-1108.
We introduce a region-specific diffeomorphic metric mapping (RDMM) registration approach. RDMM is non-parametric, estimating spatio-temporal velocity fields which parameterize the sought-for spatial transformation. Regularization of these velocity fields is necessary. In contrast to existing non-parametric registration approaches that use a fixed, spatially-invariant regularization, for example, the large displacement diffeomorphic metric mapping (LDDMM) model, our approach allows for spatially-varying regularization which is advected via the estimated spatio-temporal velocity field. Hence, not only can our model capture large displacements, it does so with a spatio-temporal regularizer that keeps track of how regions deform, which is a more natural mathematical formulation. We explore a family of RDMM registration approaches: 1) a registration model where regions with separate regularizations are pre-defined (e.g., in an atlas space or for distinct foreground and background regions), 2) a registration model where a general spatially-varying regularizer is estimated, and 3) a registration model where the spatially-varying regularizer is obtained via an end-to-end trained deep learning (DL) model. We provide a variational derivation of RDMM, showing that the model can assure diffeomorphic transformations in the continuum, and that LDDMM is a particular instance of RDMM. To evaluate the performance of RDMM, we perform experiments 1) on synthetic 2D data and 2) on two 3D datasets: knee magnetic resonance images (MRIs) from the Osteoarthritis Initiative (OAI) and computed tomography (CT) images of the lung. Results show that our framework achieves comparable performance to state-of-the-art image registration approaches, while providing additional information via a learned spatio-temporal regularizer. Further, our deep learning approach allows for very fast RDMM and LDDMM estimations. Code is available at https://github.com/uncbiag/registration.
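The central idea of the abstract, that the spatially-varying regularizer is advected (transported) along the estimated velocity field so it follows deforming regions, can be illustrated with a minimal sketch. This is not the authors' implementation (see the linked repository for that); it is a hypothetical 1D semi-Lagrangian advection step for a scalar regularizer field `sigma`, with made-up grid size, velocity, and time step chosen only for illustration.

```python
import numpy as np

def advect_step(sigma, v, dt):
    """One semi-Lagrangian advection step of a 1D field sigma by velocity v.

    sigma: (N,) regularizer values on a uniform periodic grid over [0, 1)
    v:     (N,) velocity at the same grid points
    dt:    time step
    """
    n = sigma.shape[0]
    x = np.arange(n) / n                 # grid coordinates in [0, 1)
    x_src = (x - dt * v) % 1.0           # trace characteristics backward in time
    # linearly interpolate sigma at the traced-back source points
    idx = x_src * n
    i0 = np.floor(idx).astype(int) % n
    i1 = (i0 + 1) % n
    w = idx - np.floor(idx)
    return (1 - w) * sigma[i0] + w * sigma[i1]

# Example: a region of stronger regularization is carried along by the flow.
n = 100
sigma = np.where((np.arange(n) >= 20) & (np.arange(n) < 40), 2.0, 0.5)
v = np.full(n, 0.1)                      # uniform rightward velocity
sigma_t = sigma
for _ in range(10):                      # integrate to t = 1
    sigma_t = advect_step(sigma_t, v, dt=0.1)
```

After integration, the block of high regularization has moved with the flow (here by 0.1 in normalized coordinates, i.e., 10 grid cells), mirroring how RDMM's regularizer stays attached to the anatomical regions it describes rather than remaining fixed in space.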