F3RNet: full-resolution residual registration network for deformable image registration.

Affiliations

Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China.

Brigham and Women's Hospital, Harvard Medical School, Boston, 02115, USA.

Publication information

Int J Comput Assist Radiol Surg. 2021 Jun;16(6):923-932. doi: 10.1007/s11548-021-02359-4. Epub 2021 May 3.

Abstract

PURPOSE

Deformable image registration (DIR) is essential for many image-guided therapies. Recently, deep learning approaches have gained substantial popularity and success in DIR. Most of these approaches use a mono-stream, high-to-low then low-to-high network structure and achieve satisfactory overall registration results. However, accurate alignment of some severely deformed local regions, which is crucial for pinpointing surgical targets, is often overlooked. Consequently, these approaches are not sensitive to hard-to-align regions, e.g., severely deformed liver lobes in intra-patient registration.
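For concreteness, the following is a minimal PyTorch sketch (not from the paper) of the mono-stream "high-to-low, low-to-high" structure referred to above: an encoder downsamples the stacked image pair and a decoder upsamples back to a dense 3-channel displacement field. The channel widths and layer counts are illustrative assumptions; the point is that fine local detail discarded during downsampling must be recovered by the decoder alone, which is where severely deformed regions can be under-registered.

```python
# Illustrative sketch only: a mono-stream high-to-low, low-to-high 3D registration net.
# Channel widths and depths are assumptions, not the architecture of any specific paper.
import torch
import torch.nn as nn


class MonoStreamRegNet(nn.Module):
    def __init__(self):
        super().__init__()
        # high-to-low: strided convolutions reduce resolution
        self.encoder = nn.Sequential(
            nn.Conv3d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # low-to-high: transposed convolutions restore resolution
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 3, 4, stride=2, padding=1),  # 3-channel displacement field
        )

    def forward(self, moving, fixed):
        x = torch.cat([moving, fixed], dim=1)  # stack the image pair along channels
        return self.decoder(self.encoder(x))   # dense voxel-wise displacement field


if __name__ == "__main__":
    net = MonoStreamRegNet()
    moving = torch.randn(1, 1, 64, 64, 32)
    fixed = torch.randn(1, 1, 64, 64, 32)
    print(net(moving, fixed).shape)  # torch.Size([1, 3, 64, 64, 32])
```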

METHODS

We propose a novel unsupervised registration network, the full-resolution residual registration network (F3RNet), for deformable registration of severely deformed organs. The proposed method combines two parallel processing streams in a residual learning fashion. One stream exploits full-resolution information to enable accurate voxel-level registration. The other stream learns deep multi-scale residual representations to obtain robust feature recognition. We also factorize the 3D convolutions to reduce the number of trainable parameters and improve network efficiency.
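As an illustration (a minimal PyTorch sketch under assumed channel sizes, not the authors' implementation), the block below shows the two ideas in this paragraph: factorizing a 3x3x3 convolution into a 3x3x1 in-plane convolution followed by a 1x1x3 convolution along the remaining axis (12C^2 weights instead of 27C^2 for C input and output channels), and fusing an upsampled multi-scale stream into the full-resolution stream through an additive residual connection, so the full-resolution stream passes through unchanged while the low-resolution stream contributes corrections.

```python
# Illustrative sketch: factorized 3D convolution and residual fusion of a
# full-resolution stream with an upsampled multi-scale stream.
# Channel sizes and layer counts are assumptions made for this example.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FactorizedConv3d(nn.Module):
    """3x3x3 convolution approximated as 3x3x1 followed by 1x1x3 (fewer parameters)."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.spatial = nn.Conv3d(in_ch, out_ch, kernel_size=(3, 3, 1), padding=(1, 1, 0))
        self.depth = nn.Conv3d(out_ch, out_ch, kernel_size=(1, 1, 3), padding=(0, 0, 1))

    def forward(self, x):
        return F.relu(self.depth(F.relu(self.spatial(x))))


class ResidualFusion(nn.Module):
    """Add a refined, upsampled low-resolution feature map to the full-resolution stream."""

    def __init__(self, ch: int):
        super().__init__()
        self.refine = FactorizedConv3d(ch, ch)

    def forward(self, full_res, low_res):
        up = F.interpolate(low_res, size=full_res.shape[2:], mode="trilinear",
                           align_corners=False)
        return full_res + self.refine(up)  # residual combination of the two streams


if __name__ == "__main__":
    fusion = ResidualFusion(ch=16)
    full = torch.randn(1, 16, 64, 64, 32)  # full-resolution features
    low = torch.randn(1, 16, 32, 32, 16)   # half-resolution features
    print(fusion(full, low).shape)          # torch.Size([1, 16, 64, 64, 32])
```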

RESULTS

We validate the proposed method on a clinically acquired intra-patient abdominal CT-MRI dataset and a public inspiratory and expiratory thorax CT dataset. Experiments on both multimodal and unimodal registration demonstrate promising results compared to state-of-the-art approaches.

CONCLUSION

By combining high-resolution information and multi-scale representations in a highly interactive residual learning fashion, the proposed F3RNet achieves accurate overall and local registration. The runtime for registering a pair of images is less than 3 s on a GPU. In future work, we will investigate how to process high-resolution information and fuse multi-scale representations more cost-effectively.

Similar articles

1. Adversarial learning for mono- or multi-modal registration.
Med Image Anal. 2019 Dec;58:101545. doi: 10.1016/j.media.2019.101545. Epub 2019 Aug 24.

Cited by

4. A review of deep learning-based deformable medical image registration.
Front Oncol. 2022 Dec 7;12:1047215. doi: 10.3389/fonc.2022.1047215. eCollection 2022.
5. Unimodal cyclic regularization for training multimodal image registration networks.
Proc IEEE Int Symp Biomed Imaging. 2021 Apr;2021. doi: 10.1109/isbi48211.2021.9433926. Epub 2021 May 25.
6. Unsupervised multimodal image registration with adaptative gradient guidance.
Proc IEEE Int Conf Acoust Speech Signal Process. 2021 Jun;2021. doi: 10.1109/icassp39728.2021.9414320. Epub 2021 May 13.
