
Multimodal registration via spatial-context mutual information.

Author Information

Zhao Yi, Stefano Soatto

Institution

University of California, Los Angeles, USA.

Publication Information

Inf Process Med Imaging. 2011;22:424-35. doi: 10.1007/978-3-642-22092-0_35.

Abstract

We propose a method to efficiently compute mutual information between high-dimensional distributions of image patches. This in turn is used to perform accurate registration of images captured under different modalities, while exploiting their local structure, which is otherwise missed by the traditional definition of mutual information. We achieve this by organizing the space of image patches into orbits under the action of Euclidean transformations of the image plane, and estimating the modes of a distribution in such an orbit space using affinity propagation. This way, large collections of patches that are equivalent up to translations and rotations are mapped to the same representative, or "dictionary element". We then show analytically that computing mutual information for a joint distribution in this space reduces to computing mutual information between the (scalar) label maps, and between the transformations mapping each patch into its closest dictionary element. We show that our approach improves registration performance compared with the state of the art in multimodal registration, using both synthetic and real images with quantitative ground truth.
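The core reduction in the abstract — cluster image patches into "dictionary elements" via affinity propagation, then measure mutual information between the resulting scalar label maps — can be sketched in a few lines. This is a hedged illustration, not the authors' implementation: it uses scikit-learn's `AffinityPropagation` and `mutual_info_score`, extracts simple non-overlapping patches, and omits the rotation/translation canonicalization of patches and the transformation-MI term described in the paper. The image construction and patch size are arbitrary choices for the demo.

```python
# Sketch (assumed setup, not the paper's code): cluster patches into
# dictionary labels, then compute MI between the two label maps.
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)

def patch_labels(image, patch=5):
    """Extract non-overlapping patches and assign each a dictionary label
    by clustering patch vectors with affinity propagation."""
    h, w = image.shape
    rows = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            rows.append(image[i:i + patch, j:j + patch].ravel())
    X = np.asarray(rows)
    return AffinityPropagation(random_state=0).fit_predict(X)

# Two stand-in "modalities": a blocky image and a monotone remapping of it,
# each with mild noise. Intensities differ, but the label maps obtained from
# patch clustering should still share information.
base = np.kron(rng.integers(0, 4, (4, 4)).astype(float), np.ones((5, 5)))
img_a = base + 0.01 * rng.standard_normal(base.shape)
img_b = 10.0 - 2.0 * base + 0.01 * rng.standard_normal(base.shape)

labels_a = patch_labels(img_a)   # one label per 5x5 patch (16 patches)
labels_b = patch_labels(img_b)
mi = mutual_info_score(labels_a, labels_b)
print(f"label-map mutual information: {mi:.3f}")
```

In a registration loop, this label-map MI (together with the MI between patch-to-dictionary transformations, not shown here) would serve as the similarity score being maximized over candidate alignments.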

