
Low-dose CT denoising via convolutional neural network with an observer loss function.

Author Information

School of Integrated Technology and Yonsei Institute of Convergence Technology, Yonsei University, Incheon, South Korea.

Publication Information

Med Phys. 2021 Oct;48(10):5727-5742. doi: 10.1002/mp.15161. Epub 2021 Aug 25.

Abstract

PURPOSE

Convolutional neural network (CNN)-based denoising is an effective method for reducing complex computed tomography (CT) noise. However, the image blur induced by the denoising process is a major concern. The main source of image blur is the pixel-level loss (e.g., mean squared error [MSE] and mean absolute error [MAE]) used to train a CNN denoiser. To reduce the image blur, a feature-level loss can be used instead. A CNN denoiser trained with visual geometry group (VGG) loss can preserve the small structures, edges, and texture of the image. However, VGG loss, derived from an ImageNet-pretrained image classifier, is not optimal for training a CNN denoiser for CT images. ImageNet contains natural RGB images, so the features extracted by the ImageNet-pretrained model cannot represent the characteristics of CT images that are highly correlated with diagnosis. Furthermore, a CNN denoiser trained with VGG loss introduces a bias in CT number. Therefore, we propose using a binary classification network trained on CT images as the feature extractor and define the resulting feature-level loss as the observer loss.
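
For illustration, below is a minimal sketch of the two loss families contrasted above, assuming a PyTorch/torchvision setup; the VGG truncation point, the single-channel handling, and the names used are illustrative assumptions, not the paper's exact configuration.

```python
import torch.nn as nn
import torchvision.models as models

class VGGPerceptualLoss(nn.Module):
    """Feature-level (VGG) loss: compares denoised and reference images in the
    feature space of a frozen ImageNet-pretrained VGG-16. Sketch only; the
    truncation point and single-channel handling are illustrative."""
    def __init__(self, cut=16):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        self.extractor = nn.Sequential(*list(vgg.features.children())[:cut]).eval()
        for p in self.extractor.parameters():
            p.requires_grad = False      # extractor stays fixed while the denoiser trains
        self.mse = nn.MSELoss()

    def forward(self, denoised, reference):
        # CT slices are single-channel; VGG expects 3-channel input.
        d = denoised.repeat(1, 3, 1, 1)
        r = reference.repeat(1, 3, 1, 1)
        return self.mse(self.extractor(d), self.extractor(r))

# Pixel-level loss (a source of the blur discussed above) vs. feature-level loss.
pixel_loss = nn.MSELoss()                # MSE between pixels: tends toward blurry output
feature_loss = VGGPerceptualLoss()       # MSE between deep features: better edge/texture preservation
```

The paper's observer loss keeps this feature-level formulation but replaces the ImageNet-pretrained extractor with a classifier trained on CT images, as described in METHODS.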

METHODS

As obtaining labeled CT images for training a classification network is difficult, we create labels by inserting simulated lesions. We conduct two separate classification tasks, signal-known-exactly (SKE) and signal-known-statistically (SKS), and define the corresponding feature-level losses as SKE loss and SKS loss, respectively. We use the SKE loss and SKS loss to train the CNN denoiser.
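
A minimal sketch of how such an observer loss could be wired up, assuming a PyTorch setup; the classifier architecture, names, and usage shown are hypothetical stand-ins, not the paper's network.

```python
import torch.nn as nn

class LesionClassifier(nn.Module):
    """Binary classifier (simulated lesion present vs. absent) trained on CT
    patches. The architecture is a hypothetical stand-in, not the paper's."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1))

    def forward(self, x):
        return self.head(self.features(x))   # logit for lesion present/absent

class ObserverLoss(nn.Module):
    """Feature-level loss computed in the feature space of a classifier trained
    on the SKE (fixed signal) or SKS (variable signal) detection task; the same
    formulation yields SKE loss or SKS loss depending on which classifier is used."""
    def __init__(self, classifier: LesionClassifier):
        super().__init__()
        self.extractor = classifier.features.eval()
        for p in self.extractor.parameters():
            p.requires_grad = False           # frozen while the denoiser is trained
        self.mse = nn.MSELoss()

    def forward(self, denoised, reference):
        return self.mse(self.extractor(denoised), self.extractor(reference))

# Usage sketch: first train LesionClassifier on CT patches with inserted lesions
# (SKE or SKS task), then train the denoiser with ObserverLoss, optionally
# combined with a pixel-level term.
ske_classifier = LesionClassifier()           # assume weights loaded from the SKE task
observer_loss = ObserverLoss(ske_classifier)
```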

RESULTS

Compared with pixel-level losses, a CNN denoiser trained with observer loss (i.e., SKE loss or SKS loss) is more effective at preserving structures, edges, and texture. Observer loss also resolves the CT number bias introduced by VGG loss. Comparing observer losses based on the SKE and SKS tasks, the SKS loss yields images whose noise structure is more similar to that of the reference images.

CONCLUSIONS

Training a CNN denoiser with observer loss effectively preserves structures, edges, and texture in the denoised images and prevents CT number bias. In particular, the SKS loss produces denoised images whose noise structure is similar to that of the reference images.

