
R2Net: Efficient and flexible diffeomorphic image registration using Lipschitz continuous residual networks.

Affiliations

School of Computing, University of Georgia, Athens, 30602, USA.

Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China.

Publication

Med Image Anal. 2023 Oct;89:102917. doi: 10.1016/j.media.2023.102917. Epub 2023 Aug 1.

Abstract

Classical diffeomorphic image registration methods, while accurate, face the challenge of high computational cost. Deep learning based approaches provide a fast alternative; however, most existing deep solutions either lose the desirable property of diffeomorphism or, by assuming that deformations are driven by stationary velocity fields (SVFs), have limited flexibility to capture large deformations. Moreover, the commonly adopted scaling-and-squaring technique for integrating SVFs is time- and memory-consuming, which hinders deep methods from handling large image volumes. In this paper, we present an unsupervised diffeomorphic image registration framework that uses deep residual networks (ResNets) as numerical approximations of the underlying continuous diffeomorphic setting governed by ordinary differential equations, parameterized by either SVFs or time-varying (non-stationary) velocity fields. This flexible parameterization in our Residual Registration Network (R2Net) not only gives the model the ability to capture large deformations but also reduces the time and memory cost of integrating velocity fields for deformation generation. In addition, we introduce a Lipschitz continuity constraint into the ResNet blocks to help achieve diffeomorphic deformations. To enhance our model's ability to handle images with large volume sizes, we employ a hierarchical extension with a multi-phase learning strategy that solves the image registration task in a coarse-to-fine fashion. We demonstrate our models on four 3D image registration tasks covering a wide range of anatomies, including brain MRIs, cine cardiac MRIs, and lung CT scans. Compared to the classical method SyN and the learning-based diffeomorphic VoxelMorph, our models achieve comparable or better registration accuracy with much smoother deformations. Our source code is available online at https://github.com/ankitajoshi15/R2Net.
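
The central mechanism described above, a stack of residual blocks acting as an explicit Euler integration of the flow equation dphi/dt = v_t(phi), with each residual branch constrained to be Lipschitz continuous so that every step (and hence their composition) remains invertible, can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch rendering of that idea, not the authors' implementation (which is available at the GitHub link above); all class names, layer sizes, and the use of spectral normalization as the Lipschitz bound are illustrative assumptions.

import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm


class LipschitzVelocityBlock(nn.Module):
    # One residual step: phi <- phi + step * v(phi), i.e. one explicit Euler
    # update of the flow ODE. Spectral normalization keeps each convolution
    # roughly 1-Lipschitz; combined with the 1-Lipschitz activation and a step
    # size below 1, the residual branch is a contraction, which makes the
    # identity-plus-residual map invertible. A real registration network would
    # also condition the velocity on features of the moving and fixed images;
    # that is omitted here for brevity.
    def __init__(self, channels: int = 3, hidden: int = 16, step: float = 1.0 / 8):
        super().__init__()
        self.step = step
        self.net = nn.Sequential(
            spectral_norm(nn.Conv3d(channels, hidden, kernel_size=3, padding=1)),
            nn.LeakyReLU(0.2),
            spectral_norm(nn.Conv3d(hidden, channels, kernel_size=3, padding=1)),
        )

    def forward(self, phi: torch.Tensor) -> torch.Tensor:
        return phi + self.step * self.net(phi)


class VelocityIntegrator(nn.Module):
    # Composing several distinct blocks corresponds to integrating a
    # time-varying velocity field; sharing one block across all steps would
    # instead give the stationary velocity field (SVF) parameterization.
    def __init__(self, num_steps: int = 8):
        super().__init__()
        self.blocks = nn.ModuleList(
            [LipschitzVelocityBlock(step=1.0 / num_steps) for _ in range(num_steps)]
        )

    def forward(self, phi0: torch.Tensor) -> torch.Tensor:
        phi = phi0
        for block in self.blocks:
            phi = block(phi)
        return phi


# Toy usage: integrate from a zero initial displacement field on a small volume.
phi0 = torch.zeros(1, 3, 32, 32, 32)
deformation = VelocityIntegrator(num_steps=8)(phi0)
print(deformation.shape)  # torch.Size([1, 3, 32, 32, 32])

In this sketch the deformation is produced by a fixed number of inexpensive residual updates, which is the property the abstract contrasts with the repeated grid self-compositions of scaling and squaring; whether the velocity field is stationary or time-varying comes down to sharing or not sharing the block weights across steps.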

