Zotova Daria, Pinon Nicolas, Trombetta Robin, Bouet Romain, Jung Julien, Lartizien Carole
INSA Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, CREATIS UMR 5220, U1294, Lyon, F-69621, France.
Lyon Neuroscience Research Center, INSERM U1028, CNRS UMR5292, Univ Lyon 1, Bron, 69500, France.
Comput Methods Programs Biomed. 2025 Jun;265:108727. doi: 10.1016/j.cmpb.2025.108727. Epub 2025 Mar 31.
Research in cross-modal medical image translation has been very productive over the past few years, tackling the scarce availability of large curated multi-modality datasets with the promising performance of GAN-based architectures. However, only a few of these studies have assessed the task-related performance of the synthetic data, especially for the training of deep models.
We design and compare different GAN-based frameworks for generating synthetic brain [18F]fluorodeoxyglucose (FDG) PET images from T1-weighted MRI data. We first perform standard qualitative and quantitative visual quality evaluation. We then further explore the impact of using these fake PET data in the training of a deep unsupervised anomaly detection (UAD) model designed to detect subtle epilepsy lesions in T1 MRI and FDG PET images. We introduce novel diagnostic task-oriented quality metrics of the synthetic FDG PET data tailored to our unsupervised detection task, then use these fake data to train a use-case UAD model combining deep representation learning based on Siamese autoencoders with an OC-SVM density-support estimation model. This model is trained on normal subjects only and allows the detection of any deviation from the pattern of the normal population. We compare the detection performance of models trained on 35 real T1 MR images of normal subjects paired either with the 35 corresponding true PET images or with 35 synthetic PET images generated by the best-performing generative models. Performance analysis is conducted on 17 exams of epilepsy patients who underwent surgery.
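The UAD pipeline described above, which fits a one-class support estimator on latent representations of normal subjects only and flags deviations at test time, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the random feature matrix is a hypothetical stand-in for the latent codes produced by their Siamese autoencoders, and the hyperparameters (`nu`, `gamma`) are placeholder choices.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Hypothetical stand-in for learned Siamese-autoencoder features:
# each row is the latent code of one patch from a normal subject.
rng = np.random.default_rng(0)
normal_feats = rng.normal(0.0, 1.0, size=(200, 16))

# Fit the density-support estimate on the normal population only
# (no lesion labels are ever used during training).
ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(normal_feats)

# At test time, patches far from the normal support are flagged as
# anomalies: predict() returns -1 for outliers, +1 for inliers.
far_patch = np.full((1, 16), 8.0)  # clearly outside the training cloud
print(ocsvm.predict(far_patch))
```

The design point is that the OC-SVM never sees abnormal data: any pattern outside the estimated support of the normal population, whatever its cause, is reported as anomalous.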
The best-performing GAN-based models generate realistic fake PET images of control subjects, with SSIM and PSNR values around 0.9 and 23.8, respectively, that are in distribution (ID) with respect to the true control dataset. The best UAD model trained on these synthetic normative PET data reaches 74% sensitivity.
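For reference, the two image-quality metrics reported above are standard and easy to compute. The sketch below, assuming NumPy arrays scaled to a known data range, implements PSNR exactly and a simplified single-window SSIM (the usual SSIM averages the same statistic over local Gaussian windows, as in scikit-image's `structural_similarity`).

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 10*log10(MAX^2 / MSE)."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(ref, test, data_range=1.0):
    """Single-window SSIM over the whole image (simplified sketch;
    the standard metric averages this over local windows)."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), test.mean()
    var_x, var_y = ref.var(), test.var()
    cov = np.mean((ref - mu_x) * (test - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

# A uniform 0.1 error on a unit data range gives MSE = 0.01, i.e. 20 dB.
print(psnr(np.zeros((8, 8)), np.full((8, 8), 0.1)))  # -> 20.0
```

In practice one would use validated implementations (e.g. scikit-image's `peak_signal_noise_ratio` and `structural_similarity`) rather than this sketch.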
Our results confirm that GAN-based models are the best suited for T1 MR to FDG PET translation, outperforming transformer- and diffusion-based models. We also demonstrate the diagnostic value of these synthetic data for the training of UAD models and for evaluation on clinical exams of epilepsy patients. Our code and the normative image dataset are available.