
Mitigation of motion-induced artifacts in cone beam computed tomography using deep convolutional neural networks.

Author information

Amirian Mohammadreza, Montoya-Zegarra Javier A, Herzig Ivo, Eggenberger Hotz Peter, Lichtensteiger Lukas, Morf Marco, Züst Alexander, Paysan Pascal, Peterlik Igor, Scheib Stefan, Füchslin Rudolf Marcel, Stadelmann Thilo, Schilling Frank-Peter

Affiliations

Centre for Artificial Intelligence CAI, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland.

Institute of Neural Information Processing, Ulm University, Ulm, Germany.

Publication information

Med Phys. 2023 Oct;50(10):6228-6242. doi: 10.1002/mp.16405. Epub 2023 Apr 11.

Abstract

BACKGROUND

Cone beam computed tomography (CBCT) is often employed on radiation therapy treatment devices (linear accelerators) used in image-guided radiation therapy (IGRT). For each treatment session, it is necessary to obtain the image of the day in order to accurately position the patient and to enable adaptive treatment capabilities including auto-segmentation and dose calculation. Reconstructed CBCT images often suffer from artifacts, in particular those induced by patient motion. Deep-learning based approaches promise ways to mitigate such artifacts.

PURPOSE

We propose a novel deep-learning-based approach aimed at reducing motion-induced artifacts in CBCT images and improving image quality. It is based on supervised learning and employs neural network architectures as pre- and/or post-processing steps during CBCT reconstruction.

METHODS

Our approach is based on deep convolutional neural networks which complement the standard CBCT reconstruction, performed either with the analytical Feldkamp-Davis-Kress (FDK) method or with an iterative algebraic reconstruction technique (SART-TV). The neural networks, which are based on refined U-net architectures, are trained end-to-end in a supervised learning setup. Labeled training data are obtained by means of a motion simulation, which uses the two extreme phases of 4D CT scans, their deformation vector fields, and time-dependent amplitude signals as input. The trained networks are validated against ground truth using quantitative metrics, and qualitatively by clinical experts using real patient CBCT scans.
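As an illustrative sketch only (not the authors' implementation), the motion simulation described above can be pictured as warping one extreme 4D CT phase along its deformation vector field, scaled by the instantaneous breathing amplitude. The function name, the toy 2D image, and the linear amplitude scaling of the DVF are all assumptions for illustration:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def simulate_phase(phase0, dvf, amplitude):
    """Warp the extreme-phase image `phase0` along the deformation
    vector field `dvf` (shape (2, H, W)), scaled by the breathing
    `amplitude` in [0, 1]. Intermediate respiratory states are
    approximated by linearly scaling the DVF (an assumption)."""
    h, w = phase0.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([yy + amplitude * dvf[0],
                       xx + amplitude * dvf[1]])
    return map_coordinates(phase0, coords, order=1, mode="nearest")

# Toy example: a bright square and a uniform 3-pixel deformation field.
img = np.zeros((32, 32))
img[10:20, 10:20] = 1.0
dvf = np.full((2, 32, 32), 3.0)              # uniform diagonal motion
mid = simulate_phase(img, dvf, amplitude=0.5)  # halfway between phases
```

In the paper, many such intermediate states feed the projection and reconstruction chain to produce motion-corrupted training volumes; this sketch shows only the amplitude-scaled warping step.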

RESULTS

The presented approach generalizes to unseen data and yields significant reductions in motion-induced artifacts as well as improvements in image quality compared with existing state-of-the-art CBCT reconstruction algorithms (improvements of up to +6.3 dB in peak signal-to-noise ratio, PSNR, and +0.19 in structural similarity index measure, SSIM), as evidenced by validation on an unseen test dataset and confirmed by a clinical evaluation on real patient scans (up to 74% preference for motion-artifact-reduced images over standard reconstruction).
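The PSNR figure quoted above is the standard peak signal-to-noise ratio, 10·log10(MAX²/MSE). A small self-contained sketch (the random image data and unit dynamic range are assumptions, not the paper's data) shows how such a gain is measured:

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    diff = reference.astype(np.float64) - test.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(0)
clean = rng.random((64, 64))                                   # "ground truth"
noisy = np.clip(clean + rng.normal(0.0, 0.05, clean.shape), 0.0, 1.0)
value = psnr(clean, noisy)  # higher is better
```

Because PSNR is logarithmic, a +6.3 dB improvement corresponds to the mean squared error dropping by roughly a factor of four.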

CONCLUSIONS

For the first time, it is demonstrated, also by means of a clinical evaluation, that deep neural networks inserted as pre- and post-processing plugins into an existing 3D CBCT reconstruction pipeline and trained end-to-end yield significant improvements in image quality and reductions in motion artifacts.

