
Deformable motion compensation in interventional cone-beam CT with a context-aware learned autofocus metric.

Affiliations

Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA.

Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA.

Publication Information

Med Phys. 2024 Jun;51(6):4158-4180. doi: 10.1002/mp.17125. Epub 2024 May 11.

Abstract

PURPOSE

Interventional Cone-Beam CT (CBCT) offers 3D visualization of soft-tissue and vascular anatomy, enabling 3D guidance of abdominal interventions. However, its long acquisition time makes CBCT susceptible to patient motion. Image-based autofocus offers a suitable platform for compensation of deformable motion in CBCT, but it relies on handcrafted motion metrics based on first-order image properties that lack awareness of the underlying anatomy. This work proposes a data-driven approach to motion quantification via a learned, context-aware, deformable metric that quantifies the amount of motion degradation as well as the realism of the structural anatomical content in the image.

METHODS

The proposed metric was modeled as a deep convolutional neural network (CNN) trained to recreate a reference-based structural similarity metric: visual information fidelity (VIF). The deep CNN acted on motion-corrupted images, providing an estimation of the spatial VIF map that would be obtained against a motion-free reference, capturing both motion distortion and anatomic plausibility. The deep CNN featured a multi-branch architecture with a high-resolution branch for estimation of voxel-wise VIF on a small volume of interest. A second contextual, low-resolution branch provided features associated with anatomical context for disentanglement of motion effects and anatomical appearance. The deep CNN was trained on paired motion-free and motion-corrupted data obtained with a high-fidelity forward projection model for a protocol involving 120 kV and 9.90 mGy. The performance of the metric was evaluated via correlation with the ground truth and with the underlying deformable motion field in simulated data, with deformable motion fields of amplitude ranging from 5 to 20 mm and frequency from 2.4 up to 4 cycles/scan. Robustness to variation in tissue contrast and noise levels was assessed in simulation studies with varying beam energy (90-120 kV) and dose (1.19-39.59 mGy). Further validation was obtained in experimental studies with a deformable phantom. Final validation was obtained via integration of the metric into an autofocus compensation framework, applied to motion compensation on experimental datasets and evaluated via metrics of spatial resolution on soft-tissue boundaries and sharpness of contrast-enhanced vascularity.
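The two-branch data flow described above (a high-resolution branch seeing a small volume of interest, plus a low-resolution branch seeing the downsampled surrounding context) can be sketched as follows. This is an illustrative toy, not the paper's implementation: the learned CNN branches are replaced by simple average pooling and a scalar context statistic, and all sizes (volume shape, VOI size, downsampling factor) are assumptions chosen for clarity.

```python
import numpy as np

def avg_pool3d(x, k):
    """Naive 3D average pooling, standing in for the low-resolution context branch."""
    d, h, w = (s // k for s in x.shape)
    return x[:d * k, :h * k, :w * k].reshape(d, k, h, k, w, k).mean(axis=(1, 3, 5))

def two_branch_vif_sketch(volume, voi_origin, voi_size=16, ctx_factor=4):
    """Structural sketch of the two-branch idea only.

    A real model would fuse learned convolutional features from both
    branches to regress the voxel-wise VIF map; here the 'network' is
    replaced by pooling and a scalar statistic to show data flow.
    """
    z, y, x = voi_origin
    # High-resolution branch input: a small volume of interest at full resolution.
    voi = volume[z:z + voi_size, y:y + voi_size, x:x + voi_size]
    # Low-resolution branch input: the downsampled full volume (anatomical context).
    context = avg_pool3d(volume, ctx_factor)
    ctx_feature = context.mean()  # placeholder for learned context features
    # Placeholder voxel-wise output map in [0, 1], same shape as the VOI.
    vif_map = np.clip(1.0 - np.abs(voi - ctx_feature), 0.0, 1.0)
    return vif_map

# Shape check on synthetic data (64^3 volume, 16^3 VOI).
vol = np.random.default_rng(0).random((64, 64, 64))
m = two_branch_vif_sketch(vol, (8, 8, 8))
```

The key design point mirrored here is that the output map is defined only on the small VOI (high-resolution branch), while the context branch contributes global information that lets the model separate motion blur from genuinely low-contrast anatomy.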

RESULTS

The magnitude and spatial map of the metric showed consistent and high correlation with the ground truth in both simulated and real data, yielding average normalized cross correlation (NCC) values of 0.95 and 0.88, respectively. Similarly, the metric achieved good correlation with the underlying motion field, with an average NCC of 0.90. In experimental phantom studies, the metric properly reflected changes in motion amplitude and frequency: voxel-wise averaging of the local metric across the full reconstructed volume yielded an average value of 0.69 for the case with mild motion (2 mm, 12 cycles/scan) and 0.29 for the case with severe motion (12 mm, 6 cycles/scan). Autofocus motion compensation using the metric resulted in noticeable mitigation of motion artifacts and improved spatial resolution of soft-tissue and high-contrast structures, with a reduction in edge spread function width of 8.78% and 9.20%, respectively. Motion compensation also increased the conspicuity of contrast-enhanced vascularity, reflected in an increase of 9.64% in vessel sharpness.
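The NCC figures reported above use the standard zero-mean normalized cross correlation between two volumes or maps. A minimal implementation (not from the paper) is:

```python
import numpy as np

def normalized_cross_correlation(a, b):
    """Zero-mean normalized cross correlation between two arrays.

    Returns a value in [-1, 1]: 1 for identical (up to affine scaling
    with positive gain) signals, -1 for sign-inverted ones.
    """
    a = np.asarray(a, dtype=np.float64).ravel()
    b = np.asarray(b, dtype=np.float64).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        return 0.0  # constant input: correlation undefined, report 0
    return float(np.dot(a, b) / denom)
```

Applied voxel-wise between the predicted and reference VIF maps (or between predicted degradation and the known motion field magnitude), this yields the kind of summary correlation values quoted in the results.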

CONCLUSION

The proposed metric, featuring a novel context-aware architecture, demonstrated its capacity as a reference-free surrogate of structural similarity to quantify motion-induced degradation of image quality and anatomical plausibility of image content. The validation studies showed robust performance across motion patterns, x-ray techniques, and anatomical instances. The proposed anatomy- and context-aware metric offers a powerful alternative to conventional motion estimation metrics and a step forward for the application of deep autofocus motion compensation for guidance in clinical interventional procedures.

