College of Electronics and Information Engineering, Sichuan University, Chengdu, Sichuan, 610064, China.
Comput Methods Programs Biomed. 2022 Apr;217:106676. doi: 10.1016/j.cmpb.2022.106676. Epub 2022 Feb 1.
Multi-modal medical images, such as magnetic resonance imaging (MRI) and positron emission tomography (PET), have been widely used for the diagnosis of brain disorders such as Alzheimer's disease (AD) because they provide complementary information. PET scans can detect cellular changes in organs and tissues earlier than MRI. Unlike MRI, however, PET data is difficult to acquire due to cost, radiation exposure, and other limitations; PET data is also missing for many subjects in the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. To address this problem, a 3D end-to-end generative adversarial network (named BPGAN) is proposed to synthesize brain PET from MRI scans, which can serve as a potential data completion scheme for multi-modal medical image research.
We propose BPGAN, which learns an end-to-end mapping function that transforms input MRI scans into their underlying PET scans. First, we design a 3D multiple convolution U-Net (MCU) generator architecture to improve the visual quality of the synthetic results while preserving the diverse brain structures of different subjects. By further employing a 3D gradient profile (GP) loss and a structural similarity index measure (SSIM) loss, the synthetic PET scans achieve higher similarity to the ground truth. In this study, we also explore alternative data partitioning strategies to study their impact on the performance of the proposed method in different medical scenarios.
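The generator objective combines a voxel-wise reconstruction term with the GP and SSIM losses described above. The sketch below illustrates one plausible composition in NumPy; the exact loss weights, the windowed SSIM formulation, and the function names (`gradient_profile_loss`, `generator_loss`) are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def gradient_profile_loss(real, fake):
    """Mean absolute difference of spatial gradients along each of the 3 axes."""
    loss = 0.0
    for axis in range(3):
        g_real = np.diff(real, axis=axis)
        g_fake = np.diff(fake, axis=axis)
        loss += np.mean(np.abs(g_real - g_fake))
    return loss / 3.0

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified global SSIM over the whole volume (no sliding window)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def generator_loss(real_pet, fake_pet, lambda_gp=1.0, lambda_ssim=1.0):
    """L1 reconstruction + GP loss + (1 - SSIM); weights are hypothetical."""
    l1 = np.mean(np.abs(real_pet - fake_pet))
    return (l1
            + lambda_gp * gradient_profile_loss(real_pet, fake_pet)
            + lambda_ssim * (1.0 - ssim_global(real_pet, fake_pet)))
```

For identical volumes all three terms vanish, so the loss is zero; gradient and SSIM terms penalize structural rather than purely intensity differences, which is the motivation for adding them to a plain L1 objective.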
We conduct experiments on the publicly available ADNI database. The proposed BPGAN is evaluated by mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and SSIM, outperforming the compared models on these quantitative metrics. Qualitative evaluations also validate the effectiveness of our approach. Additionally, when MRI is combined with our synthetic PET scans, the accuracies of multi-class AD diagnosis on dataset-A and dataset-B reach 85.00% and 56.47%, improvements of about 1% each over stand-alone MRI.
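The MAE and PSNR metrics used above follow standard definitions and can be sketched as below; this is a generic illustration, not the paper's evaluation code, and the `data_range` parameter assumes intensities normalized to [0, 1]. A windowed SSIM is typically computed with `skimage.metrics.structural_similarity` rather than by hand.

```python
import numpy as np

def mae(ref, syn):
    """Mean absolute error between a reference and a synthetic volume."""
    return np.mean(np.abs(ref - syn))

def psnr(ref, syn, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((ref - syn) ** 2)
    if mse == 0:
        return np.inf  # identical volumes
    return 10.0 * np.log10(data_range ** 2 / mse)
```

Lower MAE and higher PSNR/SSIM indicate closer agreement between the synthetic PET and the ground-truth scan.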
The experimental results of quantitative measures, qualitative displays, and classification evaluation demonstrate that the synthetic PET images by BPGAN are reasonable and high-quality, which provide complementary information to improve the performance of AD diagnosis. This work provides a valuable reference for multi-modal medical image analysis.