Bi Wanqing, Xv Jianan, Song Mengdie, Hao Xiaohan, Gao Dayong, Qi Fulang
The Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, Anhui, China.
Fuqing Medical Co., Ltd., Hefei, Anhui, China.
Front Neurosci. 2023 Jun 20;17:1202143. doi: 10.3389/fnins.2023.1202143. eCollection 2023.
Fine-tuning (FT) is a widely adopted transfer learning method for deep learning-based magnetic resonance imaging (MRI) reconstruction. In this approach, the reconstruction model is initialized with pre-trained weights derived from a source domain with ample data and subsequently updated with limited data from the target domain. However, the direct full-weight update strategy risks "catastrophic forgetting" and overfitting, which hinder its effectiveness. The goal of this study is to develop a zero-weight update transfer strategy that preserves pre-trained generic knowledge and reduces overfitting.
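For context, the sketch below illustrates the conventional FT setup described above; it is a minimal PyTorch example, and `ReconNet` and the checkpoint path `source_weights.pt` are hypothetical placeholders, not the paper's actual architecture or files.

```python
import torch
import torch.nn as nn

class ReconNet(nn.Module):
    """Toy image-domain reconstruction network (placeholder)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(2, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 2, 3, padding=1),
        )

    def forward(self, x):  # x: complex MR image as a 2-channel real tensor
        return self.body(x)

model = ReconNet()
# Initialize from weights pre-trained on the data-rich source domain
# ("source_weights.pt" is a hypothetical checkpoint path).
model.load_state_dict(torch.load("source_weights.pt"))

# Conventional FT then updates *every* weight with the limited
# target-domain data, which is what risks catastrophic forgetting
# and overfitting.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```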
Based on the commonality between the source and target domains, we assume that the optimal model weights of the target domain are a linear transformation of those of the source domain. Accordingly, we propose a novel transfer strategy, linear fine-tuning (LFT), which introduces scaling and shifting (SS) factors into the pre-trained model. In contrast to FT, LFT updates only the SS factors during the transfer phase, while the pre-trained weights remain fixed.
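To make the idea concrete, the following is a minimal PyTorch sketch of an LFT-style layer, assuming per-output-channel scaling and shifting factors applied to a frozen pre-trained convolution kernel; the factor granularity and the layer wrapping are illustrative assumptions, not necessarily the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LFTConv2d(nn.Module):
    """Wraps a frozen pre-trained convolution with trainable SS factors."""
    def __init__(self, pretrained: nn.Conv2d):
        super().__init__()
        # Zero-weight update: the source-domain weights are never modified.
        self.weight = nn.Parameter(pretrained.weight.detach().clone(),
                                   requires_grad=False)
        self.bias = (nn.Parameter(pretrained.bias.detach().clone(),
                                  requires_grad=False)
                     if pretrained.bias is not None else None)
        self.stride, self.padding = pretrained.stride, pretrained.padding
        out_ch = self.weight.shape[0]
        # Per-output-channel scaling/shifting factors, initialized to the
        # identity so the wrapped layer starts out equal to the pre-trained one.
        self.gamma = nn.Parameter(torch.ones(out_ch, 1, 1, 1))
        self.beta = nn.Parameter(torch.zeros(out_ch, 1, 1, 1))

    def forward(self, x):
        # Effective weight is a linear transform of the frozen source weight:
        # W_target = gamma * W_source + beta
        w = self.gamma * self.weight + self.beta
        return F.conv2d(x, w, self.bias,
                        stride=self.stride, padding=self.padding)

# Transfer phase: only gamma and beta receive gradients.
layer = LFTConv2d(nn.Conv2d(64, 64, 3, padding=1))  # stands in for a pre-trained layer
trainable = [p for p in layer.parameters() if p.requires_grad]  # gamma, beta only
optimizer = torch.optim.Adam(trainable, lr=1e-4)
```

Under this assumption, the trainable parameter count drops from the full kernel size to two factors per output channel, which is one plausible reason such a strategy would be less prone to overfitting on small target-domain datasets.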
To evaluate the proposed LFT, we designed three different transfer scenarios and conducted a comparative analysis of FT, LFT, and other methods across sampling rates and data volumes. In the transfer scenario between different contrasts, LFT outperforms typical transfer strategies across sampling rates and considerably reduces artifacts in the reconstructed images. In the transfer scenarios between different slice directions or anatomical structures, LFT surpasses FT, particularly as the number of training images in the target domain decreases, with an improvement of up to 2.06 dB (5.89%) in peak signal-to-noise ratio.
The LFT strategy shows great potential to address "catastrophic forgetting" and overfitting in transfer scenarios for MRI reconstruction, while reducing reliance on the amount of target-domain data. Linear fine-tuning is expected to shorten the development cycle of reconstruction models adapting to complicated clinical scenarios, thereby enhancing the clinical applicability of deep MRI reconstruction.