Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, Centre Léon Bérard, F-69373, Lyon, France.
Phys Med Biol. 2018 Nov 22;63(23):235001. doi: 10.1088/1361-6560/aaeaf2.
Over the last decade, dual-energy CT scanners have gone from prototypes to clinically available machines, and spectral photon counting CT scanners are following. They require a specific reconstruction process, consisting of two steps: material decomposition and tomographic reconstruction. Image-based methods perform reconstruction, then decomposition, while projection-based methods perform decomposition first, and then reconstruction. As an alternative, 'one-step inversion' methods have been proposed, which perform decomposition and reconstruction simultaneously. Unfortunately, one-step methods are typically slower than their two-step counterparts, and in most CT applications, reconstruction time is critical. This paper therefore proposes to compare the convergence speeds of five one-step algorithms. We adapted all these algorithms to solve the same problem: spectral photon-counting CT reconstruction from five energy bins, using a three-material decomposition basis and spatial regularization. The paper compares a Bayesian method which uses non-linear conjugate gradient for minimization (Cai et al 2013 Med. Phys. 40 111916-31), three methods based on quadratic surrogates (Long and Fessler 2014 IEEE Trans. Med. Imaging 33 1614-26, Weidinger et al 2016 Int. J. Biomed. Imaging 2016 1-15, Mechlem et al 2018 IEEE Trans. Med. Imaging 37 68-80), and a primal-dual method based on MOCCA, a modified Chambolle-Pock algorithm (Barber et al 2016 Phys. Med. Biol. 61 3784). Some of these methods have been accelerated by using μ-preconditioning, i.e. by performing all internal computations not with the actual materials the object is made of, but with carefully chosen linear combinations of those. In this paper, we also evaluated the impact of three different μ-preconditioners on convergence speed. Our experiments on simulated data revealed vast differences in the number of iterations required to reach a common image quality objective: Mechlem et al (2018 IEEE Trans. Med. Imaging 37 68-80) needed ten iterations, Cai et al (2013 Med. Phys. 40 111916-31), Long and Fessler (2014 IEEE Trans. Med. Imaging 33 1614-26) and Weidinger et al (2016 Int. J. Biomed. Imaging 2016 1-15) several hundred, and Barber et al (2016 Phys. Med. Biol. 61 3784) several thousand. We also summarize other practical aspects, such as memory footprint and the need to tune extra parameters.
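The μ-preconditioning idea described above, working in a transformed material basis rather than with the physical materials themselves, can be sketched numerically. The snippet below is not taken from the paper; the attenuation values are illustrative placeholders, and it shows just one common way to build such a preconditioner: orthonormalizing the columns of the energy-by-material attenuation matrix via a QR factorization, which reduces the conditioning of the material-decomposition subproblem.

```python
import numpy as np

# Hypothetical mass-attenuation matrix: rows = 5 energy bins, columns = 3
# basis materials (e.g. soft tissue, bone, contrast agent). The numbers are
# purely illustrative, not measured attenuation coefficients.
M = np.array([
    [0.30, 0.60, 9.00],
    [0.25, 0.45, 6.00],
    [0.22, 0.35, 4.00],
    [0.20, 0.28, 8.50],  # bump mimicking a contrast-agent K-edge
    [0.18, 0.24, 5.50],
])

# mu-preconditioning: instead of decomposing onto the raw materials, compute
# internally with linear combinations of them, i.e. replace M by M @ T where
# the columns of M @ T are orthonormal. QR gives M = Q R with Q orthonormal,
# so T = R^{-1} yields M @ T = Q.
Q, R = np.linalg.qr(M)   # reduced QR: Q is 5x3, R is 3x3
T = np.linalg.inv(R)     # change-of-basis matrix (the preconditioner)

cond_raw = np.linalg.cond(M)       # conditioning with the physical materials
cond_pre = np.linalg.cond(M @ T)   # conditioning in the transformed basis

# Gradient-based one-step solvers tend to converge faster in the
# well-conditioned basis; the physical material images are recovered
# afterwards as x = T @ x_tilde.
print(cond_raw, cond_pre)
```

Because the transformed columns are orthonormal, the preconditioned matrix has condition number 1, while the raw material basis (nearly collinear attenuation curves) is far worse conditioned; this is the kind of gap that makes μ-preconditioning pay off for iterative solvers.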