Gemma Piella
Department of Information & Communication Technologies, Universitat Pompeu Fabra, Barcelona 08018, Spain.
Sensors (Basel). 2014 Jun 16;14(6):10562-77. doi: 10.3390/s140610562.
Multimodal image registration is a difficult task, due to the significant intensity variations between the images. A common approach is to use sophisticated similarity measures, such as mutual information, that are robust to those intensity variations. However, these similarity measures are computationally expensive and, moreover, often fail to capture the geometry and the associated dynamics linked with the images. Another approach is the transformation of the images into a common space where modalities can be directly compared. Within this approach, we propose to register multimodal images by using diffusion maps to describe the geometric and spectral properties of the data. Through diffusion maps, the multimodal data is transformed into a new set of canonical coordinates that reflect its geometry uniformly across modalities, so that meaningful correspondences can be established between them. Images in this new representation can then be registered using a simple Euclidean distance as a similarity measure. Registration accuracy was evaluated on both real and simulated brain images with known ground-truth for both rigid and non-rigid registration. Results showed that the proposed approach achieved higher accuracy than the conventional approach using mutual information.
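The core idea in the abstract — embedding data from each modality into diffusion-map coordinates so that a plain Euclidean distance becomes a meaningful similarity — can be sketched as follows. This is a generic, minimal diffusion-map construction in NumPy, not the paper's actual pipeline; the kernel bandwidth `epsilon`, the number of coordinates `n_coords`, and the diffusion time `t` are illustrative parameters chosen here, and the input `X` stands for any stack of per-pixel or per-patch feature vectors extracted from an image.

```python
import numpy as np

def diffusion_map(X, epsilon=1.0, n_coords=2, t=1):
    """Embed the rows of X into diffusion-map coordinates.

    X        : (n_samples, n_features) array of feature vectors
    epsilon  : Gaussian kernel bandwidth (assumed, data-dependent)
    n_coords : number of diffusion coordinates to keep
    t        : diffusion time (eigenvalue power)
    """
    # Pairwise squared Euclidean distances between feature vectors.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    # Gaussian affinity kernel capturing local geometry.
    W = np.exp(-d2 / epsilon)
    # Row-normalise into a Markov transition matrix.
    P = W / W.sum(axis=1, keepdims=True)
    # Eigendecompose; the top eigenvalue is 1 with a constant eigenvector.
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals = vals.real[order]
    vecs = vecs.real[:, order]
    # Drop the trivial first eigenvector and scale the rest by
    # eigenvalue^t to obtain the diffusion coordinates.
    return vecs[:, 1:n_coords + 1] * (vals[1:n_coords + 1] ** t)
```

In a registration setting along the lines the abstract describes, one would compute such an embedding for features from each modality and then drive the alignment by minimising the Euclidean distance between the embedded representations, in place of a mutual-information criterion.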