Department of Biomedical Engineering, University of California, Davis, California, USA.
Canon Medical Research USA, Inc., Vernon Hills, Illinois, USA.
Med Phys. 2021 Sep;48(9):5244-5258. doi: 10.1002/mp.15051. Epub 2021 Jul 28.
The development of PET/CT and PET/MR scanners provides opportunities for improving PET image quality by using anatomical information. In this paper, we propose a novel co-learning three-dimensional (3D) convolutional neural network (CNN) to extract modality-specific features from PET/CT image pairs and integrate the complementary features into an iterative reconstruction framework to improve PET image reconstruction.
We used a pretrained deep neural network to represent PET images. The network was trained using low-count PET and CT image pairs as inputs and high-count PET images as labels. This network was then incorporated into a constrained maximum likelihood framework to regularize PET image reconstruction. Two different network structures were investigated for the integration of anatomical information from CT images. One was a multichannel CNN, which treated the PET and CT volumes as separate channels of the input. The other was a multibranch CNN, which implemented separate encoders for the PET and CT images to extract latent features and fed the combined latent features into a decoder. Using computer-based Monte Carlo simulations and two real patient datasets, the proposed method was compared with existing methods, including maximum likelihood expectation maximization (MLEM) reconstruction, a kernel-based reconstruction, and a CNN-based deep penalty method with and without anatomical guidance.
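The constrained maximum likelihood step described above is commonly formulated as an optimization in which the image is restricted to the range of the pretrained network. A sketch of this standard formulation is given below; the specific symbols (f_theta, z) are illustrative assumptions, not notation taken from the paper:

```latex
\hat{x} \;=\; \arg\max_{x}\; L(y \mid x)
\quad \text{subject to} \quad x = f_{\theta}\!\left(z_{\mathrm{PET}}, z_{\mathrm{CT}}\right),
```

where $L(y \mid x)$ is the Poisson log-likelihood of the measured sinogram $y$ given the image $x$, $f_{\theta}$ is the pretrained co-learning network with fixed weights $\theta$, and $z_{\mathrm{PET}}$, $z_{\mathrm{CT}}$ are its PET and CT inputs. Constrained problems of this form are typically solved with an augmented-Lagrangian scheme such as ADMM, alternating between a likelihood update and a network projection.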
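The difference between the two input structures can be illustrated with toy tensors. The sketch below is a minimal NumPy illustration of the shapes involved, not the paper's implementation; the volume sizes, feature count, and `toy_encoder` stand-in are assumptions for demonstration only:

```python
import numpy as np

# Hypothetical registered volumes with a leading channel axis
# (sizes are illustrative assumptions).
pet = np.random.rand(1, 32, 32, 32).astype(np.float32)  # low-count PET volume
ct = np.random.rand(1, 32, 32, 32).astype(np.float32)   # registered CT volume

# Multichannel CNN: PET and CT are stacked as channels of a single
# input tensor, so the first convolution sees both modalities jointly.
multichannel_input = np.concatenate([pet, ct], axis=0)  # shape (2, 32, 32, 32)

def toy_encoder(vol, n_features=8):
    """Stand-in for a modality-specific 3D CNN encoder: a fixed random
    linear map from the flattened volume to a latent feature vector."""
    rng = np.random.default_rng(0)
    w = rng.standard_normal((n_features, vol.size)).astype(np.float32)
    return w @ vol.ravel()

# Multibranch CNN: each modality passes through its own encoder, and the
# latent features are concatenated before being fed into a shared decoder.
fused_latent = np.concatenate([toy_encoder(pet), toy_encoder(ct)])  # shape (16,)
```

The design trade-off is that the multichannel form mixes modalities from the first layer onward, while the multibranch form lets each encoder learn modality-specific features before fusion in the latent space.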
Reconstructed images showed that the proposed constrained ML reconstruction approach produced higher quality images than the competing methods. Tumors in the lung region had higher contrast in the proposed constrained ML reconstruction than in the CNN-based deep penalty reconstruction. The image quality was further improved by incorporating the anatomical information. Moreover, at matched lesion contrast, the liver standard deviation was lower in the proposed approach than in all the competing methods.
The supervised co-learning strategy can improve the performance of constrained maximum likelihood reconstruction. Compared with existing techniques, the proposed method produced a better lesion contrast versus background standard deviation trade-off curve, which can potentially improve lesion detection.