Self-normalization for a 1 mm resolution clinical PET system using deep learning.

Affiliations

Department of Electrical Engineering, Stanford University, Stanford, CA, United States of America.

Department of Radiology, Stanford University, Stanford, CA, United States of America.

Publication Information

Phys Med Biol. 2024 Aug 14;69(17). doi: 10.1088/1361-6560/ad69fb.

Abstract

This work proposes, for the first time, an image-based end-to-end self-normalization framework for positron emission tomography (PET) using conditional generative adversarial networks (cGANs). We evaluated different approaches by exploring each of the following three methodologies. First, we used images that were either unnormalized or corrected for geometric factors, which encompass all time-invariant factors, as input data types. Second, we set the input tensor shape as either a single axial slice (2D) or three contiguous axial slices (2.5D). Third, we chose either Pix2Pix or polarized self-attention (PSA) Pix2Pix, which we developed for this work, as the deep learning network. The targets for all approaches were axial slices of images normalized using the direct normalization method. We performed Monte Carlo simulations of ten voxelized phantoms with the SimSET simulation tool and produced 26,000 pairs of axial image slices for training and testing.

The results showed that 2.5D PSA Pix2Pix trained with geometric-factors-corrected input images achieved the best performance among all the methods we tested. All approaches improved the general image-quality figures of merit, peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), by ∼15% to ∼55%, and 2.5D PSA Pix2Pix showed the highest PSNR (28.074) and SSIM (0.921). Lesion detectability, measured with region of interest (ROI) PSNR, SSIM, normalized contrast recovery coefficient, and contrast-to-noise ratio, was generally improved for all approaches, and 2.5D PSA Pix2Pix trained with geometric-factors-corrected input images achieved the highest ROI PSNR (28.920) and SSIM (0.973).

This study demonstrates the potential of an image-based end-to-end self-normalization framework using cGANs for improving PET image quality and lesion detectability without the need for separate normalization scans.
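Two of the abstract's ingredients can be illustrated concretely: the "2.5D" input, i.e. grouping three contiguous axial slices into one input tensor, and the PSNR figure of merit used to score reconstructions. The sketch below is illustrative only; the grouping function, toy pixel values, and peak value are assumptions, not taken from the paper.

```python
import math

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio (dB) between two equally sized images,
    here represented as flat lists of pixel values."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

def to_2p5d(volume):
    """Group an axial stack into overlapping triplets of contiguous slices,
    mimicking the 2.5D input described in the abstract (interior slices only)."""
    return [volume[i - 1:i + 2] for i in range(1, len(volume) - 1)]

# Toy 4-slice "volume"; each slice is a flat list of pixel values in [0, 1].
volume = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6], [0.7, 0.8]]
triplets = to_2p5d(volume)  # two triplets: slices (0,1,2) and (1,2,3)
print(len(triplets))
print(psnr([0.5, 0.5], [0.4, 0.6]))  # 0.1 per-pixel error -> 20.0 dB
```

A uniform per-pixel error of 0.1 against a peak of 1.0 gives MSE = 0.01 and hence PSNR = 20 dB, which puts the paper's reported values around 28 dB in perspective (smaller reconstruction error on the same scale).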
