Wang Yan, Luo Yanmei, Zu Chen, Zhan Bo, Jiao Zhengyang, Wu Xi, Zhou Jiliu, Shen Dinggang, Zhou Luping
School of Computer Science, Sichuan University, Chengdu, China.
Department of Risk Controlling Research, JD.COM, China.
Med Image Anal. 2024 Jan;91:102983. doi: 10.1016/j.media.2023.102983. Epub 2023 Oct 4.
Positron emission tomography (PET) scans can reveal abnormal metabolic activity in cells and provide valuable information for clinical diagnosis. In general, standard-dose PET (SPET) images contain more diagnostic information than low-dose PET (LPET) images, but the higher dose also carries a greater potential radiation risk. To reduce this risk while still acquiring high-quality PET images, we propose in this paper a 3D multi-modality edge-aware Transformer-GAN that reconstructs high-quality SPET images from the corresponding LPET images and T1-weighted magnetic resonance images (T1-MRI). Specifically, to fully exploit the metabolic distribution in LPET and the anatomical structural information in T1-MRI, we first use two separate CNN-based encoders to extract local spatial features from the two modalities, and design a multimodal feature integration module that fuses the two kinds of features while accounting for the varying contributions of features at different spatial locations. Then, because CNNs describe local spatial information well but struggle to model long-range dependencies in images, we further apply a Transformer-based encoder to extract global semantic information from the input images and use a CNN decoder to transform the encoded features into SPET images. Finally, a patch-based discriminator is applied to ensure that the patch-wise data distributions of the reconstructed and real images match. Considering the importance of edge information in anatomical structures for clinical diagnosis, in addition to the voxel-level estimation error and the adversarial loss, we also introduce an edge-aware loss that preserves more edge detail in the reconstructed SPET images. Experiments on a phantom dataset and a clinical dataset validate that our proposed method effectively reconstructs high-quality SPET images and outperforms current state-of-the-art methods in both qualitative and quantitative terms.
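The abstract does not specify the internals of the multimodal feature integration module, only that it weighs the two modalities' features differently at different locations. A minimal sketch of one such location-adaptive fusion, assuming a learned 1×1 projection followed by a sigmoid gate (the function name `fuse_features` and the parameters `w_proj`, `b_proj` are illustrative, not from the paper):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse_features(f_lpet, f_mri, w_proj, b_proj):
    """Location-adaptive fusion of two feature maps of shape (C, H, W).

    A 1x1 projection of the concatenated features yields a per-location
    gate in (0, 1) that decides how much each modality contributes there.
    `w_proj` (shape (1, 2C)) and `b_proj` (scalar) stand in for learned
    parameters; the paper's actual module may differ.
    """
    c, h, w = f_lpet.shape
    stacked = np.concatenate([f_lpet, f_mri], axis=0)        # (2C, H, W)
    flat = stacked.reshape(2 * c, -1)                        # (2C, H*W)
    gate = sigmoid(w_proj @ flat + b_proj).reshape(1, h, w)  # (1, H, W)
    # Per-location convex combination, broadcast across channels.
    return gate * f_lpet + (1.0 - gate) * f_mri
```

Because the gate lies in (0, 1), the fused map is everywhere a convex combination of the LPET and MRI features, so neither modality is discarded outright; locations where the projection fires strongly lean toward the LPET features, and vice versa.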
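The abstract likewise leaves the edge-aware loss unspecified beyond its purpose of retaining edge detail. A minimal 2D sketch, assuming a Sobel gradient-magnitude edge extractor and an L1 penalty between the edge maps of the reconstructed and real images (the paper operates on 3D volumes, where a 3D edge operator would be used instead):

```python
import numpy as np

# Sobel kernels for horizontal and vertical image gradients (2D for brevity).
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T

def conv2d_valid(img, kernel):
    """Naive 'valid'-mode 2D cross-correlation."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def edge_map(img):
    """Gradient-magnitude edge map of a 2D image."""
    gx = conv2d_valid(img, SOBEL_X)
    gy = conv2d_valid(img, SOBEL_Y)
    return np.sqrt(gx ** 2 + gy ** 2)

def edge_aware_loss(reconstructed, target):
    """Mean absolute difference between the two images' edge maps."""
    return np.mean(np.abs(edge_map(reconstructed) - edge_map(target)))
```

A term of this form is zero when the two images share identical edge structure and grows as anatomical boundaries in the reconstruction drift from those in the real SPET image, which is why it complements a plain voxel-level error: the voxel loss can be small even when edges are blurred.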