Department of Biomedical Engineering, School of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan 430074, China.
School of Computer, Electronics and Information, Guangxi University, Nanning 530004, China.
Sensors (Basel). 2019 Oct 28;19(21):4675. doi: 10.3390/s19214675.
Non-rigid multi-modal three-dimensional (3D) medical image registration is highly challenging due to the difficulty of constructing a similarity measure and solving for the non-rigid transformation parameters. A novel structural-representation-based registration method is proposed to address these problems. Firstly, an improved modality independent neighborhood descriptor (MIND) based on foveated nonlocal self-similarity is designed for the effective structural representation of 3D medical images, transforming multi-modal image registration into a mono-modal one. The sum of absolute differences between structural representations is computed as the similarity measure. Subsequently, a spatial constraint based on the foveated MIND is introduced into the Markov random field (MRF) optimization to reduce the number of transformation parameters and restrict the calculation of the energy function to the image region involving non-rigid deformation. Finally, accurate and efficient 3D medical image registration is realized by minimizing the similarity-measure-based MRF energy function. Extensive experiments on 3D positron emission tomography (PET), computed tomography (CT), and T1-, T2-, and PD-weighted magnetic resonance (MR) images with synthetic deformation demonstrate that the proposed method achieves higher computational efficiency and registration accuracy, in terms of target registration error (TRE), than registration methods based on the hybrid L-BFGS-B and cat swarm optimization (HLCSO), the sum of squared differences on entropy images, the MIND, and the self-similarity context (SSC) descriptor, except that it yields a slightly larger TRE than the HLCSO for CT-PET image registration. Experiments on real MR and ultrasound images with unknown deformation have also been performed to demonstrate the practicality and superiority of the proposed method.
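To make the descriptor-plus-SAD idea concrete, below is a minimal sketch of the classic MIND descriptor (Heinrich et al.) on a 3D volume, with the sum of absolute differences (SAD) between descriptors used as the multi-modal similarity measure. It is not the authors' implementation: the foveated self-similarity weighting and the MRF-based optimization described in the abstract are omitted, and all function names, the 6-neighbourhood search region, and the patch radius are illustrative assumptions.

```python
# Sketch only: classic MIND descriptor + SAD similarity (not the paper's
# foveated variant, no MRF optimization).
import numpy as np
from scipy.ndimage import uniform_filter, shift

def mind_descriptor(volume, patch_radius=1, search_offsets=None, eps=1e-6):
    """Per-voxel MIND descriptor of a 3D float array."""
    if search_offsets is None:
        # 6-neighbourhood search region, as in the original MIND formulation.
        search_offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                          (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    size = 2 * patch_radius + 1
    # Patch distance D_p(x, x+r): box-filtered squared difference between
    # the volume and its shifted copy.
    dists = []
    for r in search_offsets:
        shifted = shift(volume, r, order=1, mode='nearest')
        dists.append(uniform_filter((volume - shifted) ** 2, size=size))
    dists = np.stack(dists, axis=-1)                      # (Z, Y, X, |R|)
    variance = dists.mean(axis=-1, keepdims=True) + eps   # local variance estimate
    mind = np.exp(-dists / variance)
    # Normalise so the maximum response per voxel is 1.
    return mind / (mind.max(axis=-1, keepdims=True) + eps)

def sad_similarity(fixed, moving, **kwargs):
    """SAD between MIND descriptors: smaller values indicate better alignment."""
    d_fixed = mind_descriptor(fixed, **kwargs)
    d_moving = mind_descriptor(moving, **kwargs)
    return np.abs(d_fixed - d_moving).mean()

# Usage (hypothetical loaders): two roughly pre-aligned volumes of different
# modalities, e.g. CT and MR.
# fixed, moving = load_ct(), load_mr()
# print(sad_similarity(fixed, moving))
```

Because both volumes are mapped to structural representations before the SAD is taken, the comparison behaves like a mono-modal measure, which is the property the abstract relies on when it converts multi-modal registration into a mono-modal problem.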