Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, UK.
Med Image Anal. 2012 Oct;16(7):1423-35. doi: 10.1016/j.media.2012.05.008. Epub 2012 May 31.
Deformable registration of images obtained from different modalities remains a challenging task in medical image analysis. This paper addresses this important problem and proposes a modality independent neighbourhood descriptor (MIND) for both linear and deformable multi-modal registration. Based on the similarity of small image patches within one image, it aims to extract the distinctive structure in a local neighbourhood, which is preserved across modalities. The descriptor is based on the concept of image self-similarity, which has been introduced for non-local means filtering for image denoising. It is able to distinguish between different types of features such as corners, edges and homogeneously textured regions. MIND is robust to the most considerable differences between modalities: non-functional intensity relations, image noise and non-uniform bias fields. The multi-dimensional descriptor can be efficiently computed in a dense fashion across the whole image and provides point-wise local similarity across modalities based on the absolute or squared difference between descriptors, making it applicable for a wide range of transformation models and optimisation algorithms. We use the sum of squared differences of the MIND representations of the images as a similarity metric within a symmetric non-parametric Gauss-Newton registration framework. In principle, MIND would be applicable to the registration of arbitrary modalities. In this work, we apply and validate it for the registration of clinical 3D thoracic CT scans between inhale and exhale as well as the alignment of 3D CT and MRI scans. Experimental results show the advantages of MIND over state-of-the-art techniques such as conditional mutual information and entropy images, with respect to clinically annotated landmark locations.
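To make the idea concrete, below is a minimal 2D sketch of a MIND-style descriptor and its SSD similarity. It is not the paper's implementation: the patch size, the plain 4-neighbourhood search region, the use of a box filter for patch distances, and the normalisation are simplifying assumptions; the paper uses 3D volumes, a six-neighbourhood, and Gaussian-weighted patch distances.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def mind_descriptor(img, patch=3, eps=1e-8):
    """Sketch of a 2D MIND-style descriptor (hypothetical parameters).

    For each pixel, compute patch-based distances D_p between the patch
    at that pixel and the patches at its 4-neighbours, normalise by a
    local variance estimate V (here: the mean of those distances), and
    map through exp(-D_p / V) to obtain one descriptor channel per offset.
    """
    offsets = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # simplified search region
    dists = []
    for dy, dx in offsets:
        shifted = np.roll(img, shift=(dy, dx), axis=(0, 1))
        # patch distance: local mean of squared intensity differences
        # (box filter here; the paper uses Gaussian weighting)
        dists.append(uniform_filter((img - shifted) ** 2, size=patch))
    dists = np.stack(dists, axis=0)
    v = dists.mean(axis=0) + eps  # variance estimate V(I, x)
    mind = np.exp(-dists / v)
    mind /= mind.max(axis=0) + eps  # normalise the maximum component to 1
    return mind  # shape: (len(offsets), H, W)


def mind_ssd(img1, img2):
    """Point-wise dissimilarity: squared differences of the MIND
    representations, averaged over the search region, as used inside
    a registration cost function."""
    m1, m2 = mind_descriptor(img1), mind_descriptor(img2)
    return ((m1 - m2) ** 2).mean(axis=0)
```

Because the descriptor depends only on intensity differences within one image, an inverted copy of an image (a simple example of a changed intensity relation) yields the same descriptor, so `mind_ssd(img, 1 - img)` is near zero even though the raw intensity SSD is large; this is the property that makes the metric usable across modalities.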