
Memory-Efficient Training for Fully Unrolled Deep Learned PET Image Reconstruction with Iteration-Dependent Targets.

Author Information

Corda-D'Incan Guillaume, Schnabel Julia A, Reader Andrew J

Affiliations

School of Biomedical Engineering and Imaging Sciences, Department of Biomedical Engineering, King's College London, St. Thomas' Hospital, London, UK.

Publication Information

IEEE Trans Radiat Plasma Med Sci. 2022 May;6(5):552-563. doi: 10.1109/TRPMS.2021.3101947. Epub 2021 Aug 2.

Abstract

We propose a new version of the forward-backward splitting expectation-maximisation network (FBSEM-Net) along with a new memory-efficient training method enabling the training of fully unrolled implementations of 3D FBSEM-Net. FBSEM-Net unrolls the maximum a posteriori expectation-maximisation (MAP-EM) algorithm and replaces the regularisation step with a residual convolutional neural network. Both the gradient of the prior and the regularisation strength are learned from training data. In this new implementation, three modifications of the original framework are included. First, iteration-dependent networks are used so that the regularisation is customised to each iteration. Second, iteration-dependent targets and losses are introduced so that the regularised reconstruction matches the reconstruction of noise-free data at every iteration. Third, sequential training is performed, making the training of large unrolled networks far more memory-efficient and hence feasible. Since sequential training permits unrolling a high number of iterations, there is no need to use the regularisation step artificially as a leapfrogging acceleration. Results obtained on 2D and 3D simulated data show that FBSEM-Net with iteration-dependent targets and losses improves the consistency of the network parameter optimisation across different training runs. We also found that iteration-dependent targets increase the generalisation capability of the network. Furthermore, unrolled networks using iteration-dependent regularisation achieved a slight reduction in reconstruction error compared to using a fixed regularisation network at every iteration. Finally, we demonstrate that sequential training successfully addresses potentially serious memory issues in the training of deep unrolled networks. In particular, it enables the training of a 3D fully unrolled FBSEM-Net, not previously feasible, by reducing memory usage by up to 98% compared to conventional end-to-end training. We also note that the truncation of backpropagation due to sequential training does not notably degrade the network's performance compared to conventional training with full backpropagation through the entire network.
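
To make the unrolled structure concrete, the following is a minimal PyTorch sketch of a fully unrolled FBSEM-Net-style reconstruction with iteration-dependent regularisers. This is an illustration under stated assumptions, not the authors' code: `forward_project`, `back_project`, and `sens_img` are placeholders for the PET system model A, its adjoint A^T, and the sensitivity image A^T 1; the `ResCNN` architecture is a stand-in, and the fusion step assumes a normalised sensitivity.

```python
import torch
import torch.nn as nn


class ResCNN(nn.Module):
    """Small residual CNN standing in for the learned regularisation step
    (illustrative architecture, not the paper's exact network)."""
    def __init__(self, n_feat=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, n_feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(n_feat, n_feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(n_feat, 1, 3, padding=1),
        )

    def forward(self, x):
        # Residual learning; the ReLU keeps the regularised image non-negative.
        return torch.relu(x + self.body(x))


class FBSEMNet(nn.Module):
    """Fully unrolled MAP-EM with one regulariser network per iteration."""
    def __init__(self, n_iter, forward_project, back_project, sens_img):
        super().__init__()
        self.n_iter = n_iter
        self.A, self.At = forward_project, back_project   # system model A and A^T
        self.register_buffer("sens", sens_img)            # sensitivity image A^T 1
        # Iteration-dependent regularisers and learned regularisation strengths.
        self.reg_nets = nn.ModuleList(ResCNN() for _ in range(n_iter))
        self.gammas = nn.Parameter(torch.full((n_iter,), 0.1))

    def em_update(self, x, meas, background):
        # Standard MLEM update: x * A^T(m / (A x + b)) / (A^T 1).
        ratio = meas / (self.A(x) + background + 1e-9)
        return x / (self.sens + 1e-9) * self.At(ratio)

    def fuse(self, x_em, x_reg, gamma):
        # Positive root of the quadratic arising from the separable MAP
        # surrogate, combining the EM and regularised estimates.
        d = 1.0 - gamma * x_reg
        return 2.0 * x_em / (d + torch.sqrt(d * d + 4.0 * gamma * x_em) + 1e-9)

    def forward(self, meas, background, return_all=False):
        x = torch.ones_like(self.sens)                    # uniform initial image
        estimates = []
        for k in range(self.n_iter):
            x_em = self.em_update(x, meas, background)    # forward (EM) step
            x_reg = self.reg_nets[k](x)                   # learned backward step
            x = self.fuse(x_em, x_reg, torch.abs(self.gammas[k]))
            estimates.append(x)
        return estimates if return_all else x
```

Returning every intermediate estimate (`return_all=True`) is what makes an iteration-dependent loss possible, e.g. summing a mean-squared error between each estimate and the corresponding iteration of a noise-free reconstruction.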
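
The sequential training scheme can be sketched in the same vein. The key mechanism is that detaching the image estimate between unrolled iterations truncates backpropagation, so only one iteration's computation graph is ever held in memory. In this hedged sketch, `targets[k]` is assumed to hold the iteration-k reconstruction of the noise-free data and `opts[k]` an optimiser over iteration k's parameters (its regulariser network and regularisation strength); both are hypothetical names, not the authors' API.

```python
import torch


def sequential_train_step(model, meas, background, targets, opts):
    """One memory-efficient pass over the unrolled iterations of an
    FBSEMNet as sketched above: each iteration is trained against its
    own target, and the estimate is detached before being passed on."""
    x = torch.ones_like(model.sens)                     # uniform initial image
    losses = []
    for k in range(model.n_iter):
        x_em = model.em_update(x, meas, background)     # forward (EM) step
        x_reg = model.reg_nets[k](x)                    # iteration-k regulariser
        x_out = model.fuse(x_em, x_reg, torch.abs(model.gammas[k]))
        loss = torch.mean((x_out - targets[k]) ** 2)    # iteration-dependent loss
        opts[k].zero_grad()
        loss.backward()             # the graph covers iteration k only
        opts[k].step()
        x = x_out.detach()          # truncate backprop; iteration-k graph is freed
        losses.append(loss.item())
    return losses
```

Because `x` enters each iteration detached, the backward pass never reaches earlier iterations, so peak memory scales with a single unrolled iteration rather than with their total, consistent with the reported reduction of up to 98% relative to end-to-end training.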


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/83c0/7612803/91ca54e09e04/EMS144690-f001.jpg
