A Fully Differentiable Framework for 2D/3D Registration and the Projective Spatial Transformers.

Publication Information

IEEE Trans Med Imaging. 2024 Jan;43(1):275-285. doi: 10.1109/TMI.2023.3299588. Epub 2024 Jan 2.

Abstract

Image-based 2D/3D registration is a critical technique for fluoroscopic guided surgical interventions. Conventional intensity-based 2D/3D registration approaches suffer from a limited capture range due to the presence of local minima in hand-crafted image similarity functions. In this work, we aim to extend the 2D/3D registration capture range with a fully differentiable deep network framework that learns to approximate a convex-shape similarity function. The network uses a novel Projective Spatial Transformer (ProST) module that has unique differentiability with respect to 3D pose parameters, and is trained using an innovative double backward gradient-driven loss function. We compare the most popular learning-based pose regression methods in the literature and use the well-established CMAES intensity-based registration as a benchmark. We report registration pose error, target registration error (TRE), and success rate (SR) with a threshold of 10 mm for mean TRE. For the pelvis anatomy, the median TRE of ProST followed by CMAES is 4.4 mm with a SR of 65.6% in simulation, and 2.2 mm with a SR of 73.2% in real data. The CMAES SRs without using ProST registration are 28.5% and 36.0% in simulation and real data, respectively. Our results suggest that the proposed ProST network learns a practical similarity function, which vastly extends the capture range of conventional intensity-based 2D/3D registration. We believe that the unique differentiable property of ProST has the potential to benefit related 3D medical imaging research applications. The source code is available at https://github.com/gaocong13/Projective-Spatial-Transformers.
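The core idea the abstract describes — a projection operator that is differentiable with respect to pose parameters, so a similarity loss can drive the pose toward the target view — can be illustrated with a toy numpy sketch. This is not the paper's implementation (ProST uses GPU grid sampling with cone-beam geometry and a learned similarity); here a parallel-beam projector with bilinear in-plane sampling, a plain MSE loss, and a finite-difference pose gradient stand in for those pieces, and all names and parameters are illustrative.

```python
import numpy as np

def project(volume, tx, ty):
    # Parallel-beam projection of a 3D volume along its last axis after an
    # in-plane translation (tx, ty). Bilinear interpolation (with edge
    # clamping) makes the output vary piecewise-linearly with the pose,
    # which is what makes pose gradients meaningful.
    n = volume.shape[0]
    xs = np.arange(n) - tx
    ys = np.arange(n) - ty
    x0 = np.clip(np.floor(xs).astype(int), 0, n - 2)
    y0 = np.clip(np.floor(ys).astype(int), 0, n - 2)
    wx = np.clip(xs - x0, 0.0, 1.0)[:, None, None]
    wy = np.clip(ys - y0, 0.0, 1.0)[None, :, None]
    interp = ((1 - wx) * (1 - wy) * volume[x0][:, y0]
              + wx * (1 - wy) * volume[x0 + 1][:, y0]
              + (1 - wx) * wy * volume[x0][:, y0 + 1]
              + wx * wy * volume[x0 + 1][:, y0 + 1])
    return interp.sum(axis=2)  # integrate along rays -> 2D image

def mse(a, b):
    return float(np.mean((a - b) ** 2))

rng = np.random.default_rng(0)
vol = rng.random((16, 16, 16))
target = project(vol, 0.0, 0.0)   # the "fixed" image at the true pose

# At the true pose the loss is zero; away from it the loss grows, and a
# central finite difference gives a well-defined gradient in the pose.
eps = 1e-4
tx = 1.3
grad_tx = (mse(project(vol, tx + eps, 0.0), target)
           - mse(project(vol, tx - eps, 0.0), target)) / (2 * eps)
```

In the actual framework this gradient flows analytically through the sampling grid rather than by finite differences, which is what allows end-to-end training of the similarity function.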

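The abstract's evaluation metrics — mean target registration error per case and success rate at a 10 mm TRE threshold — are standard and easy to state precisely. A minimal sketch with illustrative data (function names and point sets are not from the paper's code):

```python
import numpy as np

def mean_tre(points_gt, points_reg):
    # Mean Euclidean distance (mm) between ground-truth and registered
    # target points for a single case.
    return float(np.linalg.norm(
        np.asarray(points_gt) - np.asarray(points_reg), axis=1).mean())

def success_rate(case_tres, threshold_mm=10.0):
    # Fraction of cases whose mean TRE falls below the threshold.
    case_tres = np.asarray(case_tres, dtype=float)
    return float((case_tres < threshold_mm).mean())

# One case with two target points, one displaced by a 3-4-5 triangle:
# per-point errors are 5 mm and 0 mm, so the mean TRE is 2.5 mm.
tre_case = mean_tre([[0, 0, 0], [10, 0, 0]],
                    [[3, 4, 0], [10, 0, 0]])

# Four hypothetical cases; three fall under the 10 mm threshold.
sr = success_rate([4.4, 2.2, 15.0, 8.0])
```

Reporting SR alongside median TRE, as the paper does, separates how often registration succeeds from how accurate it is when it does.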
