Interdisciplinary Nanoscience Center (iNANO) and Department of Chemistry, Aarhus University, DK-8000 Aarhus C, Denmark.
J Magn Reson. 2012 Oct;223:164-9. doi: 10.1016/j.jmr.2012.07.002. Epub 2012 Jul 20.
We present an analytical algorithm using fast Fourier transforms (FTs) for deriving the gradient needed as part of the iterative reconstruction of sparsely sampled datasets using the forward maximum entropy reconstruction (FM) procedure by Hyberts and Wagner [J. Am. Chem. Soc. 129 (2007) 5108]. The major drawback of the original algorithm is that it requires one FT and one evaluation of the entropy per missing data point to establish the gradient. In the present study, we demonstrate that the entire gradient may be obtained using only two FTs and one evaluation of the entropy derivative, thus achieving impressive time savings compared to the original procedure. An example: a 2D dataset with sparse sampling of the indirect dimension, sampling only 75 out of 512 complex points (∼15% sampling), would lack (512-75)×2=874 points per ν(2) slice. The original FM algorithm would require 874 FTs and entropy function evaluations to set up the gradient, while the present algorithm is ∼450 times faster in this case, since it requires only two FTs. This reduces the computational time from several hours to less than a minute. Even more impressive time savings may be achieved for 2D reconstructions of 3D datasets, where reconstructions that required days of CPU time on high-performance computing clusters with the original algorithm require only a few minutes on a regular laptop computer with the new algorithm.
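Since the abstract only sketches the idea, the following minimal NumPy sketch illustrates how a frequency-domain derivative can be propagated back to all missing time-domain points with a single additional FT via the chain rule through the DFT, i.e. two FTs in total instead of one per missing point. The entropy-like function q, the array sizes, and the choice of missing indices are placeholders for illustration only and are not the actual FM entropy or data layout from the paper.

```python
import numpy as np

# Placeholder "entropy-like" penalty and its exact derivative, acting on
# spectrum magnitudes. The actual FM entropy of Hyberts & Wagner differs;
# this stand-in only makes the gradient bookkeeping concrete.
def q(x, eps=1e-12):
    return -x * np.log(x + eps)

def dq(x, eps=1e-12):
    return -(np.log(x + eps) + x / (x + eps))

def objective(d):
    """Entropy-like objective evaluated on the spectrum of the trial FID d."""
    f = np.fft.fft(d)
    return np.sum(q(np.abs(f)))

def gradient_two_fts(d, missing):
    """Gradient with respect to the missing complex points using two FTs:
      1) one forward FT of the trial FID to obtain the spectrum,
      2) one FT of the frequency-domain derivative to map it back.
    Returned as complex numbers: real part = dQ/d(Re), imag part = dQ/d(Im)."""
    f = np.fft.fft(d)                       # FT no. 1
    mag = np.abs(f)
    w = dq(mag) * np.conj(f) / np.maximum(mag, 1e-12)
    g = np.conj(np.fft.fft(w))              # FT no. 2 (chain rule through DFT)
    return g[missing]

def gradient_per_point(d, missing, h=1e-6):
    """Brute-force reference in the spirit of the original per-point scheme:
    FTs and entropy evaluations scale with the number of missing points."""
    grad = np.zeros(len(missing), dtype=complex)
    for n, j in enumerate(missing):
        for part in (1.0, 1j):
            dp = d.copy(); dp[j] += part * h
            dm = d.copy(); dm[j] -= part * h
            deriv = (objective(dp) - objective(dm)) / (2 * h)
            grad[n] += deriv if part == 1.0 else 1j * deriv
    return grad

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N = 64
    d = rng.standard_normal(N) + 1j * rng.standard_normal(N)
    missing = np.arange(40, N)              # hypothetical non-sampled indices
    print(np.allclose(gradient_two_fts(d, missing),
                      gradient_per_point(d, missing), atol=1e-5))  # -> True
```

The per-point reference is included only as a numerical check of the analytical gradient; its cost profile mirrors the original scheme described in the abstract, growing with the number of missing points, whereas the analytical version stays at two FTs regardless of how many points are missing.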