

IE-CycleGAN: improved cycle consistent adversarial network for unpaired PET image enhancement.

Affiliations

The Institute of Information Processing and Automation, College of Information Engineering, Zhejiang University of Technology, Hangzhou, China.

The State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China.

Publication information

Eur J Nucl Med Mol Imaging. 2024 Nov;51(13):3874-3887. doi: 10.1007/s00259-024-06823-6. Epub 2024 Jul 23.

Abstract

PURPOSE

Technological advances in instruments have greatly promoted the development of positron emission tomography (PET) scanners. State-of-the-art PET scanners such as uEXPLORER can collect PET images of significantly higher quality. However, these scanners are not currently available in most local hospitals due to the high cost of manufacturing and maintenance. Our study aims to convert low-quality PET images acquired by common PET scanners into images of comparable quality to those obtained by state-of-the-art scanners without the need for paired low- and high-quality PET images.

METHODS

In this paper, we proposed an improved CycleGAN (IE-CycleGAN) model for unpaired PET image enhancement. The proposed method is based on CycleGAN, with a correlation coefficient loss and a patient-specific prior loss added to constrain the structure of the generated images. Furthermore, we defined a normalX-to-advanced training strategy to enhance the generalization ability of the network. The proposed method was validated on unpaired uEXPLORER datasets and Biograph Vision local hospital datasets.
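The abstract names a "correlation coefficient loss" but does not give its formula. A common choice for such a structural constraint is one minus the Pearson correlation between the generated image and its reference, which is minimized when the two are perfectly linearly correlated. The sketch below is a hypothetical NumPy illustration of that standard form, not the paper's exact implementation.

```python
import numpy as np

def correlation_coefficient_loss(x, y, eps=1e-8):
    """Hypothetical sketch of a correlation coefficient loss:
    1 - Pearson r between two images, flattened to vectors.
    Returns 0 for perfectly correlated inputs, 2 for perfectly
    anti-correlated inputs. `eps` guards against division by zero
    on constant images."""
    x = np.asarray(x, dtype=np.float64).ravel()
    y = np.asarray(y, dtype=np.float64).ravel()
    xc = x - x.mean()  # center both signals
    yc = y - y.mean()
    r = (xc * yc).sum() / (np.sqrt((xc**2).sum() * (yc**2).sum()) + eps)
    return 1.0 - r
```

In a CycleGAN training loop, a term like this would typically be weighted and added to the adversarial and cycle-consistency losses; the weighting used by IE-CycleGAN is not stated in the abstract.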

RESULTS

For the uEXPLORER dataset, the proposed method achieved better results than non-local mean filtering (NLM), block-matching and 3D filtering (BM3D), and deep image prior (DIP), and results comparable to Unet (supervised) and CycleGAN (supervised). For the Biograph Vision local hospital datasets, the proposed method achieved higher contrast-to-noise ratios (CNR) and tumor-to-background SUVmax ratios (TBR) than NLM, BM3D, and DIP. In addition, it showed higher contrast, SUV, and TBR than Unet (supervised) and CycleGAN (supervised) when applied to images from different scanners.
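The CNR and TBR metrics reported above have standard textbook definitions, though the paper's exact ROI protocol is not given in the abstract. Assuming the usual forms, CNR is the tumor-background mean difference normalized by background noise, and TBR is the tumor SUVmax over the mean background SUV:

```python
import numpy as np

def cnr(tumor_roi, background_roi):
    """Contrast-to-noise ratio: (mean_tumor - mean_bg) / std_bg.
    A standard definition; the paper's exact ROI placement and
    noise estimator are assumptions here."""
    t = np.asarray(tumor_roi, dtype=np.float64)
    b = np.asarray(background_roi, dtype=np.float64)
    return (t.mean() - b.mean()) / b.std()

def tbr(tumor_suv, background_suv):
    """Tumor-to-background SUVmax ratio: max SUV in the tumor ROI
    over the mean SUV in the background ROI."""
    return np.max(tumor_suv) / np.mean(background_suv)
```

Higher values of both metrics indicate a lesion that stands out more clearly from background activity, which is why they are used to compare the enhancement methods.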

CONCLUSION

The proposed unpaired PET image enhancement method outperforms NLM, BM3D, and DIP. Moreover, it performs better than Unet (supervised) and CycleGAN (supervised) when implemented on local hospital datasets, demonstrating its excellent generalization ability.

