Department of Physics, University of Texas at Arlington, TX 76019, United States of America.
Department of Radiology, Mayo Clinic, Rochester, MN 55905, United States of America.
Biomed Phys Eng Express. 2022 Nov 4;8(6). doi: 10.1088/2057-1976/ac9da7.
Computed tomography (CT) is widely used to diagnose many diseases. Low-dose CT has been actively pursued to lower the ionizing radiation risk. A relatively smoother kernel is typically used in low-dose CT to suppress image noise, which may sacrifice spatial resolution. In this work, we propose a texture transformer network to simultaneously reduce image noise and improve spatial resolution in CT images. This network, referred to as Texture Transformer for Super Resolution (TTSR), is a reference-based deep-learning image super-resolution method built upon a generative adversarial network (GAN). The noisy low-resolution CT (LRCT) image and the routine-dose high-resolution CT (HRCT) image serve as the query and key in a transformer, respectively. Image translation is optimized through deep neural network (DNN) texture extraction, correlation embedding, and attention-based texture transfer and synthesis to achieve joint feature learning between LRCT and HRCT images for super-resolution CT (SRCT) images. To evaluate SRCT performance, we use data from both simulations with the XCAT phantom program and real patient data. Peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and feature similarity (FSIM) index are used as quantitative metrics. For comparison of SRCT performance, cubic spline interpolation, SRGAN (a GAN super-resolution method with an additional content loss), and GAN-CIRCLE (a GAN super-resolution method with cycle consistency) were used. Compared to these methods, TTSR restores more details in SRCT images and achieves better PSNR, SSIM, and FSIM for both simulation and real-patient data. In addition, we show that TTSR yields better image quality and demands much less computation time than denoising high-resolution low-dose CT images with block-matching and 3D filtering (BM3D) or GAN-CIRCLE.
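As a point of reference for the quantitative comparison above, PSNR is the simplest of the three metrics to state explicitly. A minimal sketch (not the authors' evaluation code; the function name and normalized data range are illustrative assumptions):

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio between a reference image (e.g. HRCT)
    and a test image (e.g. SRCT), both assumed scaled to [0, data_range]."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return np.inf  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)
```

For example, two images on [0, 1] that differ everywhere by 0.1 give an MSE of 0.01 and hence a PSNR of 20 dB; higher values indicate the SRCT output is closer to the routine-dose reference.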
In summary, the proposed TTSR method, based on a texture transformer and an attention mechanism, provides an effective and efficient tool to improve the spatial resolution and suppress the noise of low-dose CT images.
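The attention mechanism at the core of the abstract's query/key description (LRCT features as query, HRCT features as key, with attention-weighted texture transfer) can be sketched as standard scaled dot-product attention over feature vectors. This is an illustrative simplification, not the TTSR implementation; the function name and flattened feature layout are assumptions:

```python
import numpy as np

def texture_attention(query, key, value):
    """Attention-based texture transfer sketch.
    query: (n_q, d) LRCT feature vectors; key, value: (n_k, d) HRCT feature
    vectors. Each LRCT position receives a correlation-weighted mix of
    reference HRCT textures."""
    # correlation embedding between LRCT queries and HRCT keys
    scores = query @ key.T / np.sqrt(query.shape[1])
    # softmax over the reference positions
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    # attention-weighted transfer of HRCT texture features
    return weights @ value
```

With uniform scores (e.g. all-zero queries and keys), each output row is simply the mean of the reference textures; in practice the learned features make the weighting highly selective toward the most relevant HRCT patches.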