Siemens Medical Solutions USA, Inc., 810 Innovation Drive, Knoxville, TN, 37932, USA.
Joint Department of Medical Imaging, Princess Margaret Cancer Centre, Mount Sinai Hospital and Women's College Hospital, University of Toronto, University Health Network, 610 University Ave, Toronto, Ontario, M5G 2M9, Canada.
Eur J Nucl Med Mol Imaging. 2021 Nov;48(12):3817-3826. doi: 10.1007/s00259-021-05413-0. Epub 2021 May 22.
Artificial intelligence (AI) algorithms based on deep convolutional networks have demonstrated remarkable success in image transformation tasks. State-of-the-art results have been achieved by generative adversarial networks (GANs) and by training approaches that do not require paired data. Recently, these techniques have been applied to cross-domain image translation in the medical field.
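The abstract does not specify the network architecture or training scheme; purely as an illustration of unpaired adversarial training with cycle consistency (CycleGAN-style), the sketch below uses toy PyTorch networks, random tensors standing in for MR and CT slices, and an illustrative loss weight. It is a minimal sketch of the general technique, not the authors' implementation.

```python
# Minimal sketch of unpaired image-to-image translation with cycle
# consistency (CycleGAN-style). Networks, shapes, and loss weights are
# illustrative placeholders, not the architecture used in the paper.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.InstanceNorm2d(out_ch),
                         nn.ReLU(inplace=True))

class Generator(nn.Module):
    """Tiny stand-in generator: maps one image domain to the other."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 16), conv_block(16, 16),
                                 nn.Conv2d(16, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Tiny PatchGAN-like critic: real/fake score per spatial location."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 16), nn.Conv2d(16, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

G_mr2ct, G_ct2mr = Generator(), Generator()
D_ct, D_mr = Discriminator(), Discriminator()
opt_g = torch.optim.Adam(list(G_mr2ct.parameters()) + list(G_ct2mr.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(list(D_ct.parameters()) + list(D_mr.parameters()), lr=2e-4)
adv_loss, cyc_loss = nn.MSELoss(), nn.L1Loss()
lambda_cyc = 10.0  # weight of the cycle-consistency term (illustrative)

# Unpaired batches: MR and CT slices need not come from the same patient.
mr = torch.randn(4, 1, 64, 64)
ct = torch.randn(4, 1, 64, 64)

# --- generator update: fool the critics and preserve cycle consistency ---
fake_ct, fake_mr = G_mr2ct(mr), G_ct2mr(ct)
pred_fake_ct, pred_fake_mr = D_ct(fake_ct), D_mr(fake_mr)
loss_g = (adv_loss(pred_fake_ct, torch.ones_like(pred_fake_ct))
          + adv_loss(pred_fake_mr, torch.ones_like(pred_fake_mr))
          + lambda_cyc * cyc_loss(G_ct2mr(fake_ct), mr)
          + lambda_cyc * cyc_loss(G_mr2ct(fake_mr), ct))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# --- discriminator update: separate real images from generated ones ---
pred_real_ct, pred_fake_ct_d = D_ct(ct), D_ct(fake_ct.detach())
pred_real_mr, pred_fake_mr_d = D_mr(mr), D_mr(fake_mr.detach())
loss_d = (adv_loss(pred_real_ct, torch.ones_like(pred_real_ct))
          + adv_loss(pred_fake_ct_d, torch.zeros_like(pred_fake_ct_d))
          + adv_loss(pred_real_mr, torch.ones_like(pred_real_mr))
          + adv_loss(pred_fake_mr_d, torch.zeros_like(pred_fake_mr_d)))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()
```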
This study investigated deep learning-based image transformation in medical imaging. It was motivated by the need to identify generalizable methods that simultaneously satisfy the requirements of image quality and anatomical accuracy across the entire human body. Specifically, whole-body MR patient data acquired on a PET/MR system were used to generate synthetic CT image volumes. The suitability of these synthetic CT data for PET attenuation correction (AC) was evaluated and compared with current MR-based attenuation correction (MR-AC) methods, which typically segment tissue types using multiphase Dixon sequences.
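For context, PET AC requires a map of linear attenuation coefficients (a mu map) at 511 keV, and a CT or synthetic CT volume in Hounsfield units is commonly converted to such a map with a piecewise-linear (bilinear) scaling. The sketch below uses illustrative textbook break points and slopes, not the scanner-specific calibration used in this study.

```python
# Minimal sketch: convert a (synthetic) CT volume in Hounsfield units to a
# 511-keV linear-attenuation (mu) map via the common bilinear scaling.
# Break point and slopes are illustrative, not a vendor calibration.
import numpy as np

MU_WATER_511 = 0.096  # cm^-1, linear attenuation of water at 511 keV

def hu_to_mu(ct_hu: np.ndarray, breakpoint_hu: float = 0.0,
             slope_soft: float = MU_WATER_511 / 1000.0,
             slope_bone: float = 0.000064) -> np.ndarray:
    """Piecewise-linear (bilinear) mapping from HU to mu at 511 keV."""
    mu = np.where(
        ct_hu <= breakpoint_hu,
        MU_WATER_511 + slope_soft * ct_hu,   # air / soft-tissue segment
        MU_WATER_511 + slope_bone * ct_hu,   # bone segment (shallower slope)
    )
    return np.clip(mu, 0.0, None)  # attenuation cannot be negative

# Toy volume containing air (-1000 HU), soft tissue (0 HU), and bone (+1000 HU)
ct = np.array([[[-1000.0, 0.0, 1000.0]]])
print(hu_to_mu(ct))  # roughly [0.0, 0.096, 0.16] cm^-1
```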
This work aimed to assess the technical performance of a GAN system for general MR-to-CT volumetric transformation and to evaluate how well the generated images perform for PET AC. A dataset of matched, same-day PET/MR and PET/CT patient scans was used for validation.
A combination of training techniques was used to produce synthetic images that were both high quality and anatomically accurate. Mu-map values derived from the synthetic CT images correlated more strongly with those calculated directly from CT data than did values from the default segmented Dixon approach. Over the entire body, the total reconstructed PET activity was similar between the two MR-AC methods, but the synthetic CT method quantified tracer uptake in specific regions more accurately.
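The abstract does not state the exact agreement metric; as a sketch of how such a voxel-wise comparison could be carried out, the snippet below computes the Pearson correlation between a reference CT-derived mu map and an MR-derived one inside a body mask, assuming the volumes are already co-registered and resampled to a common grid. The arrays, mask threshold, and "Dixon-like" quantization are toy stand-ins.

```python
# Minimal sketch: voxel-wise Pearson correlation between co-registered mu maps.
# Arrays, mask threshold, and noise/quantization models are illustrative only.
import numpy as np

def mu_map_correlation(mu_ref: np.ndarray, mu_test: np.ndarray,
                       body_mask: np.ndarray) -> float:
    """Pearson correlation of mu values restricted to a body mask."""
    return float(np.corrcoef(mu_ref[body_mask], mu_test[body_mask])[0, 1])

# Toy volumes on a common grid
rng = np.random.default_rng(0)
mu_ct = rng.uniform(0.0, 0.16, size=(32, 32, 32))        # CT-derived reference
mu_synct = mu_ct + rng.normal(0.0, 0.005, mu_ct.shape)   # synthetic-CT-derived
mu_dixon = np.round(mu_ct / 0.05) * 0.05                 # coarse, segmented-like
mask = mu_ct > 0.01                                       # crude body mask

print("synthetic CT vs CT:", mu_map_correlation(mu_ct, mu_synct, mask))
print("Dixon vs CT:       ", mu_map_correlation(mu_ct, mu_dixon, mask))
```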
The findings reported here demonstrate the feasibility of this technique and its potential to improve certain aspects of attenuation correction for PET/MR systems. Moreover, this work may have broader implications for establishing generalized methods for inter-modality, whole-body transformation in medical imaging. Unsupervised deep learning techniques can produce high-quality synthetic images, but additional constraints may be needed to maintain medical integrity in the generated data.