Ge Yongshuai, Su Ting, Zhu Jiongtao, Deng Xiaolei, Zhang Qiyang, Chen Jianwei, Hu Zhanli, Zheng Hairong, Liang Dong
Research Center for Medical Artificial Intelligence, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China.
Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China.
Quant Imaging Med Surg. 2020 Feb;10(2):415-427. doi: 10.21037/qims.2019.12.12.
Recently, the paradigm of computed tomography (CT) reconstruction has shifted as deep learning techniques have evolved. In this study, we proposed a new convolutional neural network (called ADAPTIVE-NET) that performs CT image reconstruction directly from a sinogram by integrating analytical domain-transformation knowledge.
In the proposed ADAPTIVE-NET, a dedicated network layer with constant weights transforms the sinogram into the CT image domain via analytical back-projection. Within this new framework, feature extraction is performed simultaneously in both the sinogram domain and the CT image domain. The Mayo low-dose CT (LDCT) dataset was used to validate the new network. In particular, the new network was compared with the previously proposed residual encoder-decoder (RED)-CNN network. For each network, training with the mean squared error (MSE) loss alone was compared against training with an additional VGG-based perceptual loss. Furthermore, to evaluate image quality quantitatively, noise correlation was characterized via the noise power spectrum (NPS) of the LDCT images reconstructed by each method.
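The core idea of the constant-weight layer can be illustrated with a toy example. The sketch below is not the authors' implementation: `forward_matrix`, the nearest-pixel ray model, the image size, and the angle set are all hypothetical simplifications. It builds a small parallel-beam system matrix A, then applies its transpose as a fixed (non-trainable) linear map, i.e., unfiltered analytical back-projection embedded as a layer with frozen weights.

```python
import numpy as np

def forward_matrix(n, angles):
    # Build a simple parallel-beam system matrix A for an n-by-n image.
    # Each row of A sums the pixels along one ray (nearest-pixel model).
    rows = []
    ys, xs = np.mgrid[0:n, 0:n]
    cx = (n - 1) / 2.0
    for theta in angles:
        # Signed distance of each pixel centre from the central ray.
        t = (xs - cx) * np.cos(theta) + (ys - cx) * np.sin(theta)
        bins = np.round(t + cx).astype(int)
        for b in range(n):
            rows.append((bins == b).astype(float).ravel())
    return np.array(rows)

n = 16
angles = np.linspace(0.0, np.pi, 8, endpoint=False)
A = forward_matrix(n, angles)

# The "constant-weight layer": back-projection is the fixed matrix A.T,
# applied like a linear layer whose weights are never updated.
image = np.zeros((n, n))
image[8, 8] = 1.0
sino = A @ image.ravel()                # analytical forward projection
backproj = (A.T @ sino).reshape(n, n)   # fixed-weight back-projection

# A point source smears into a star pattern centred on the point, with
# the back-projected intensity peaking at the original pixel.
assert backproj[8, 8] == backproj.max()
```

In a full network the subsequent convolutional layers would then denoise this back-projected image, while the transpose-operator weights stay frozen during training.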
CT images with clinically relevant dimensions of 512×512 can be readily reconstructed from a sinogram by ADAPTIVE-NET on a single graphics processing unit (GPU) with moderate memory (e.g., 11 GB). With the same MSE loss function, the new network generates better results than RED-CNN. Moreover, the new network reconstructs natural-looking CT images with enhanced image quality when the VGG-based perceptual loss is used jointly.
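The NPS evaluation mentioned above can be sketched as follows. This is a generic 2D noise power spectrum estimator, not the authors' code: the function name, ROI shapes, and the white-noise check are illustrative assumptions. The NPS is the ensemble-averaged squared magnitude of the Fourier transform of mean-subtracted noise-only ROIs, scaled so that integrating it over spatial frequency recovers the noise variance (Parseval's theorem).

```python
import numpy as np

def noise_power_spectrum(rois, pixel_size=1.0):
    """Estimate the 2D NPS from a stack of noise-only ROIs.

    NPS(fx, fy) = (dx*dy / (Nx*Ny)) * mean_over_ROIs |DFT(roi - mean)|^2
    """
    rois = np.asarray(rois, dtype=float)
    n_roi, ny, nx = rois.shape
    # Remove each ROI's mean so only the noise component remains.
    rois = rois - rois.mean(axis=(1, 2), keepdims=True)
    dft2 = np.abs(np.fft.fftshift(np.fft.fft2(rois), axes=(1, 2))) ** 2
    nps = dft2.mean(axis=0) * (pixel_size ** 2) / (nx * ny)
    freqs = np.fft.fftshift(np.fft.fftfreq(nx, d=pixel_size))
    return freqs, nps

# Sanity check: white Gaussian noise gives an approximately flat NPS
# whose integral over frequency equals the noise variance.
rng = np.random.default_rng(0)
rois = rng.normal(0.0, 2.0, size=(200, 64, 64))
freqs, nps = noise_power_spectrum(rois)
df = freqs[1] - freqs[0]
variance = nps.sum() * df * df   # integrate NPS over (fx, fy)
assert abs(variance - 4.0) < 0.2
```

Correlated noise, such as the streak-like residual noise typical of LDCT reconstructions, shows up as a non-flat NPS, which is why the NPS reveals differences in noise texture that a scalar metric like MSE cannot.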
The newly proposed end-to-end supervised ADAPTIVE-NET is able to reconstruct high-quality LDCT images directly from a sinogram.