Yi Zhao, Stefano Soatto
University of California, Los Angeles, USA.
Inf Process Med Imaging. 2011;22:424-35. doi: 10.1007/978-3-642-22092-0_35.
We propose a method to efficiently compute mutual information between high-dimensional distributions of image patches. This is in turn used to perform accurate registration of images captured under different modalities, exploiting local structure that is otherwise missed by the traditional definition of mutual information. We achieve this by organizing the space of image patches into orbits under the action of Euclidean transformations of the image plane, and estimating the modes of the distribution in this orbit space using affinity propagation. This way, large collections of patches that are equivalent up to translation and rotation are mapped to the same representative, or "dictionary element". We then show analytically that computing mutual information for a joint distribution in this space reduces to computing mutual information between the (scalar) label maps, and between the transformations mapping each patch to its closest dictionary element. We show that our approach improves registration performance compared with the state of the art in multimodal registration, using both synthetic and real images with quantitative ground truth.
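The core idea of replacing mutual information between high-dimensional patch distributions with mutual information between scalar label maps can be illustrated with a minimal sketch. This is not the paper's implementation: it omits the Euclidean-orbit quotient and the transformation term, uses scikit-learn's `AffinityPropagation` for the dictionary, and the images, patch size, and intensity remapping below are all illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)

def extract_patches(img, size=5):
    # Collect every size x size patch of a 2-D image, flattened to a vector.
    H, W = img.shape
    return np.array([img[i:i + size, j:j + size].ravel()
                     for i in range(H - size + 1)
                     for j in range(W - size + 1)])

# Two toy "modalities": the second is a nonlinear intensity remapping of the
# first, so pixel values differ but local structure is shared (an assumption
# standing in for real multimodal data).
img_a = rng.random((20, 20))
img_b = np.cos(3.0 * img_a)

patches_a = extract_patches(img_a)
patches_b = extract_patches(img_b)

# Cluster each patch set into "dictionary elements" via affinity propagation;
# each patch then carries a scalar label, its closest dictionary element.
labels_a = AffinityPropagation(random_state=0).fit_predict(patches_a)
labels_b = AffinityPropagation(random_state=0).fit_predict(patches_b)

# Mutual information between the two scalar label maps stands in for the
# intractable MI between the high-dimensional patch distributions.
mi = mutual_info_score(labels_a, labels_b)
print(f"MI between label maps: {mi:.3f}")
```

In the paper this quantity would additionally be combined with the mutual information between the transformations aligning each patch to its dictionary element, and maximized over candidate registrations.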