IConDiffNet: an unsupervised inverse-consistent diffeomorphic network for medical image registration.

Author Information

Rui Liao, Jeffrey F Williamson, Tianyu Xia, Tao Ge, Joseph A O'Sullivan

Affiliations

Washington University in St. Louis, Saint Louis, MO 63130, United States of America.

Peking University, Beijing, People's Republic of China.

Publication Information

Phys Med Biol. 2025 Feb 20;70(5). doi: 10.1088/1361-6560/ada516.

Abstract

Deformable image registration (DIR) is critical in many medical imaging applications. Diffeomorphic transformations, which are smooth invertible mappings with smooth inverses, preserve topological properties and are an anatomically plausible means of constraining the solution space in many settings. Traditional iterative optimization-based diffeomorphic DIR algorithms are computationally costly and cannot consistently resolve large, complicated deformations in medical image registration. Convolutional neural network implementations can rapidly estimate the transformation through a pre-trained model; however, the architecture of most neural networks for DIR fails to systematically enforce diffeomorphism and inverse consistency. In this paper, a novel unsupervised neural network structure is proposed to perform fast, accurate, and inverse-consistent diffeomorphic DIR.

This paper introduces a novel unsupervised inverse-consistent diffeomorphic registration network, termed IConDiffNet, which incorporates an energy constraint that minimizes the total energy expended during the deformation process. The IConDiffNet architecture consists of two symmetric paths, each employing multiple recursive cascaded updating blocks (neural networks) to handle the different virtual time steps parameterizing the path from the initial undeformed image to the final deformation. These blocks estimate velocities corresponding to specific time steps, generating a series of smooth time-dependent velocity vector fields. Simultaneously, the inverse transformations are estimated by the corresponding blocks in the inverse path. By integrating these time-dependent velocity fields from both paths, optimal forward and inverse transformations are obtained, aligning the image pair in both directions.

Our proposed method was evaluated on a three-dimensional inter-patient image registration task using a large-scale brain MRI dataset of 375 subjects.
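The time-stepped integration described above can be sketched in a few lines. The snippet below is a minimal 2D NumPy illustration, not the authors' code: it composes a series of small per-time-step displacement updates (standing in for the velocities a cascaded block would predict) into a single forward transformation. The grid size, the toy velocity field, and the composition convention are all illustrative assumptions.

```python
# Minimal 2D sketch of integrating time-dependent velocity fields into a
# transformation, in the spirit of IConDiffNet's recursive time-step blocks.
# Everything here (grid size, toy velocity field) is an illustrative assumption.
import numpy as np
from scipy.ndimage import map_coordinates

H, W, STEPS = 32, 32, 8                       # grid size and virtual time steps
ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
identity = np.stack([ys, xs]).astype(float)   # (2, H, W) identity coordinate grid

def compose(phi, disp):
    """One common convention: return phi sampled at (id + disp)."""
    sample = identity + disp                  # coordinates to read phi from
    return np.stack([
        map_coordinates(phi[c], sample, order=1, mode="nearest")
        for c in range(2)
    ])

phi = identity.copy()
for t in range(STEPS):
    # A toy smooth velocity field standing in for a network block's output;
    # scaled by 1/STEPS so each update stays small (small steps help keep
    # the composed map invertible).
    v = np.stack([np.sin(xs / W * np.pi), np.cos(ys / H * np.pi)]) / STEPS
    phi = compose(phi, v)

disp_total = phi - identity                   # net displacement after all steps
print(disp_total.shape)
```

In the paper's setting the per-step fields are produced by learned blocks in 3D and the inverse path integrates the corresponding inverse velocities; here a fixed analytic field simply makes the composition loop concrete.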
The proposed IConDiffNet achieves fast and accurate DIR, with a higher Dice similarity coefficient (DSC), a lower Hausdorff distance, and lower total energy expended during deformation on the test dataset than competing state-of-the-art deep-learning diffeomorphic DIR approaches. Visualizations show that IConDiffNet produces more complex transformations that better align structures than the VoxelMorph-Diff, SYMNet, and ANTs-SyN methods.

The proposed IConDiffNet represents an advancement in unsupervised deep-learning-based DIR approaches. By ensuring inverse consistency and diffeomorphic properties in the output transformations, IConDiffNet offers a pathway to improved registration accuracy, particularly in clinical settings where diffeomorphic properties are crucial. Furthermore, the generality of IConDiffNet's network structure supports direct extension to diverse 3D image registration challenges; this adaptability is facilitated by the flexibility of the objective function used in optimizing the network, which can be tailored to different registration tasks.
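Inverse consistency, as emphasized above, means the forward and inverse transformations compose to (approximately) the identity. A minimal sketch of such a check, again a 2D NumPy illustration rather than the paper's actual loss, composes a forward displacement with a candidate inverse and measures the residual from the identity grid; the border margin and the constant-shift example are assumptions made for the sketch.

```python
# Illustrative inverse-consistency residual (not the paper's loss function):
# compose (id + u_inv) with (id + u_fwd) and measure deviation from identity.
import numpy as np
from scipy.ndimage import map_coordinates

H, W = 32, 32
ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
ident = np.stack([ys, xs]).astype(float)      # (2, H, W) identity grid

def warp(field, disp):
    """Sample each channel of `field` at coordinates (id + disp)."""
    coords = ident + disp
    return np.stack([
        map_coordinates(field[c], coords, order=1, mode="nearest")
        for c in range(2)
    ])

def inverse_consistency_loss(u_fwd, u_inv, margin=4):
    """Mean squared residual of (id + u_inv) ∘ (id + u_fwd) vs the identity,
    evaluated on the interior to ignore boundary extrapolation effects."""
    comp = warp(ident + u_inv, u_fwd)
    res = (comp - ident)[:, margin:-margin, margin:-margin]
    return float(np.mean(res ** 2))

# An exactly invertible toy pair: a constant shift and its negation.
shift = np.full((2, H, W), 2.0)
loss = inverse_consistency_loss(shift, -shift)
print(loss)
```

In training, a term like this (alongside similarity and energy terms) penalizes forward/inverse pairs that fail to undo each other; for the exactly invertible toy pair the interior residual vanishes.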

