
Limited parameter denoising for low-dose X-ray computed tomography using deep reinforcement learning.

Affiliations

Pattern Recognition Lab, Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen, Germany.

CT Concepts, Siemens Healthineers AG, Forchheim, Germany.

Publication information

Med Phys. 2022 Jul;49(7):4540-4553. doi: 10.1002/mp.15643. Epub 2022 Apr 21.

Abstract

BACKGROUND

The use of deep learning has successfully solved several problems in the field of medical imaging, and it has been applied successfully to the CT denoising problem. However, deep learning requires large amounts of data to train deep convolutional neural networks (CNNs). Moreover, owing to their large parameter counts, such deep CNNs may produce unexpected results.

PURPOSE

In this study, we introduce a novel CT denoising framework, which has interpretable behavior and provides useful results with limited data.

METHODS

We employ bilateral filtering in both the projection and volume domains to remove noise. To account for nonstationary noise, we tune the filters' σ parameters for every projection view and every volume pixel. The tuning is carried out by two deep CNNs. Because labeling optimal parameters is impractical, the two CNNs are trained via a deep Q-learning reinforcement learning task. The reward for the task is generated by a custom reward function represented by a neural network. Our experiments were carried out on abdominal scans from the Mayo Clinic dataset of The Cancer Imaging Archive (TCIA) and the American Association of Physicists in Medicine (AAPM) Low Dose CT Grand Challenge.
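To make the core filtering step concrete, the sketch below shows a bilateral filter whose range σ can vary per pixel, mirroring the idea of spatially tuned parameters. This is an illustrative simplification, not the authors' implementation: the function name, the neighborhood radius, and the use of a per-pixel σ map are assumptions for this sketch (in the paper the σ maps would come from the tuning CNNs).

```python
import numpy as np

def bilateral_filter(image, sigma_s, sigma_i, radius=3):
    """Bilateral filtering of a 2D image.

    sigma_s : spatial (domain) Gaussian sigma, a scalar.
    sigma_i : range (intensity) Gaussian sigma; may be a scalar or an
              array the same shape as `image`, mimicking per-pixel tuning.
    """
    image = np.asarray(image, dtype=float)
    sigma_i = np.broadcast_to(np.asarray(sigma_i, dtype=float), image.shape)
    h, w = image.shape
    out = np.zeros_like(image)
    # Precompute the spatial (domain) Gaussian kernel once.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    padded = np.pad(image, radius, mode="reflect")
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Range kernel: penalize intensity differences relative to the
            # center pixel, using this pixel's own sigma.
            rng = np.exp(-((patch - image[y, x])**2) / (2.0 * sigma_i[y, x]**2))
            weights = spatial * rng
            out[y, x] = np.sum(weights * patch) / np.sum(weights)
    return out
```

A small σ at a given pixel keeps edges sharp there (the range kernel suppresses dissimilar neighbors), while a large σ smooths more aggressively, which is why tuning σ per pixel can adapt to nonstationary noise.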

RESULTS

Our denoising framework achieves excellent denoising performance, increasing the peak signal-to-noise ratio (PSNR) from 28.53 to 28.93 and the structural similarity index (SSIM) from 0.8952 to 0.9204. We outperform several state-of-the-art deep CNNs that have several orders of magnitude more parameters (p-value [PSNR] = 0.000, p-value [SSIM] = 0.000). Our method introduces neither the blurring caused by mean squared error (MSE) loss-based methods nor the deep learning artifacts introduced by Wasserstein generative adversarial network (WGAN)-based models. Our ablation studies show that parameter tuning combined with our reward network yields the best results.
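The PSNR and SSIM numbers above follow the standard definitions of those metrics. For reference, a minimal sketch of both is shown below; the SSIM here is computed globally over the whole image, a simplification of the usual locally windowed SSIM, and the function names and default constants (k1, k2) are assumptions for this sketch.

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(ref, test, data_range=1.0, k1=0.01, k2=0.03):
    """SSIM computed over the whole image at once (simplified, unwindowed)."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mu_x, mu_y = ref.mean(), test.mean()
    var_x, var_y = ref.var(), test.var()
    cov = ((ref - mu_x) * (test - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

Because SSIM compares local luminance, contrast, and structure rather than raw pixel error, it is commonly reported alongside PSNR when judging whether a denoiser preserves anatomical detail rather than merely reducing pixel-wise error.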

CONCLUSIONS

We present a novel CT denoising framework that focuses on interpretability to deliver good denoising performance, especially with limited data. Our method outperforms state-of-the-art deep neural networks. Future work will focus on accelerating our method and generalizing it to different geometries and body parts.

