Hongkun Ge, Guorong Wu, Li Wang, Yaozong Gao, Dinggang Shen
Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA.
Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA.
Mach Learn Med Imaging. 2015 Oct 5;9352:203-211. doi: 10.1007/978-3-319-24888-2_25.
Mutual information (MI) has been widely used for registering images of different modalities. Since most inter-modality registration methods estimate deformations at a local scale but optimize MI over the entire image, the estimated deformation of a particular structure can be dominated by surrounding, unrelated structures. Moreover, because each image typically contains multiple structures, the intensity correlation between two images can be complex and highly nonlinear, which prevents global MI from precisely guiding local image deformation. To address these issues, we propose a hierarchical inter-modality registration method based on robust feature matching. Specifically, we first select a small set of key points at salient image locations to drive the entire registration. Since image features computed from different modalities are generally not directly comparable, we propose to learn common feature representations by projecting the features from their native spaces into a common space where the correlations between corresponding features are maximized. Given the large heterogeneity between the two high-dimensional feature distributions, we employ Kernel CCA (Canonical Correlation Analysis) to capture such nonlinear feature mappings. Our registration method can then use the learned common features to reliably establish correspondences for key points across modality images by robust feature matching. As more and more key points take part in the registration, our hierarchical feature-based registration method efficiently estimates the deformation pathway between two inter-modality images in a global-to-local manner. We have applied the proposed method to prostate CT and MR images, as well as to infant MR brain images acquired in the first year of life. Experimental results show that our method achieves more accurate registration than other state-of-the-art image registration methods.
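The central learning step the abstract describes is regularized Kernel CCA over paired key-point features from the two modalities. Below is a minimal sketch in Python/NumPy of one standard KCCA formulation (the Hardoon-style reduced eigenproblem with an RBF kernel); the function names, hyperparameters, and the choice of kernel are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise squared Euclidean distances between rows of A and B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def center_kernel(K):
    # Double-center a training kernel matrix (removes the feature-space mean).
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def kcca(X, Y, gamma=1.0, reg=1e-3, n_components=10):
    """Minimal regularized Kernel CCA sketch.

    X, Y: paired feature matrices (n_samples x d1, n_samples x d2),
    e.g. descriptors extracted at corresponding key points in two
    modalities. Returns dual coefficients (alpha, beta) and the
    canonical correlations rho.
    """
    n = X.shape[0]
    Kx = center_kernel(rbf_kernel(X, X, gamma))
    Ky = center_kernel(rbf_kernel(Y, Y, gamma))
    Rx = Kx + reg * np.eye(n)   # regularized Gram matrices
    Ry = Ky + reg * np.eye(n)
    # Reduced eigenproblem (Hardoon et al., 2004):
    #   (Kx + rI)^-1 Ky (Ky + rI)^-1 Kx alpha = rho^2 alpha
    M = np.linalg.solve(Rx, Ky) @ np.linalg.solve(Ry, Kx)
    vals, vecs = np.linalg.eig(M)
    order = np.argsort(-vals.real)[:n_components]
    rho = np.sqrt(np.clip(vals.real[order], 0.0, 1.0))
    alpha = vecs.real[:, order]
    # beta is proportional to (Ky + rI)^-1 Kx alpha by the stationarity
    # conditions of the KCCA objective.
    beta = np.linalg.solve(Ry, Kx @ alpha)
    return alpha, beta, rho

def project(K_new_vs_train, coefs):
    # Map new samples into the learned common space via their kernel
    # evaluations against the training set (test-kernel centering
    # omitted here for brevity).
    return K_new_vs_train @ coefs
```

Under these assumptions, key points from the two images would each be projected into the common space (via `project` with their kernels against the training samples), after which correspondences can be established by, for example, nearest-neighbor matching of the projected features, since correlated feature pairs land close together in that space.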