Physikalisch-Technische Bundesanstalt, Braunschweig and Berlin, Germany.
Division of Imaging Sciences and Biomedical Engineering, King's College London, London, United Kingdom.
Phys Med Biol. 2021 Apr 23;66(9). doi: 10.1088/1361-6560/abf278.
In this work, we consider the task of image reconstruction in 2D radial cardiac cine MRI using deep learning (DL)-based regularization. As the regularization is achieved by employing an image prior predicted by a pre-trained convolutional neural network (CNN), the quality of the image prior is of essential importance. The achievable performance of any DL-based method is limited by the amount and the quality of the available training data. For fast dynamic processes, obtaining good-quality MR data is challenging for technical and physiological reasons. In this work, we try to overcome these problems with a transfer-learning approach motivated by a previously presented DL method (XT,YT U-Net). There, the network is trained not on whole 2D dynamic images but on 2D spatio-temporal profiles (xt- and yt-slices), which show the temporal changes of the imaged object. Therefore, it is more important that the spatio-temporal profiles of the training and test data share similar local features than that they depict the same anatomy. This allows us to equip arbitrary image data with simulated motion that resembles cardiac motion and to use it as training data. By doing so, it is possible to train a CNN that is applicable to cardiac cine MR data without using ground-truth cine MR images for training. We demonstrate that combining the XT,YT U-Net with the proposed transfer-learning strategy delivers performance comparable to that of CNNs trained on cardiac cine MR images and in some cases even qualitatively surpasses them. Additionally, the transfer-learning strategy was investigated for a 2D and a 3D U-Net. The images processed by the CNNs were used as image priors in the CNN-regularized iterative reconstruction. The XT,YT U-Net yielded visibly better results than the 2D U-Net and slightly better results than the 3D U-Net when used in combination with the presented transfer-learning strategy.
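As a rough illustration of the two ingredients described above, the sketch below (NumPy, with hypothetical array shapes) shows how spatio-temporal xt- and yt-profiles can be extracted from a 2D cine stack and how a CNN-predicted image prior can enter an iterative reconstruction. The quadratic prior term lam * ||x - x_prior||^2 is one common choice and is used here only as an assumption; the paper's exact regularizer, forward operator, and optimizer may differ.

```python
import numpy as np

# Hypothetical 2D cine stack with shape (Nt time frames, Nx, Ny); shapes are assumptions.
cine = np.random.rand(30, 160, 160)

# Spatio-temporal profiles: for a fixed y (or x) coordinate, the 2D slice shows how
# that line of the image changes over time. The XT,YT U-Net is trained on such
# profiles instead of on whole 2D frames.
xt_profiles = [cine[:, :, y] for y in range(cine.shape[2])]   # each of shape (Nt, Nx)
yt_profiles = [cine[:, x, :] for x in range(cine.shape[1])]   # each of shape (Nt, Ny)


def cnn_regularized_recon(A, y, x_prior, lam=1.0, n_iter=100):
    """Gradient descent on ||A x - y||^2 + lam * ||x - x_prior||^2.

    A is a stand-in linear forward operator (in the MRI setting this would be
    radial sampling combined with coil sensitivities); x_prior is the image
    predicted by the pre-trained CNN.
    """
    # Step size chosen from the Lipschitz constant of the gradient to keep the
    # iteration stable (an implementation detail of this sketch, not of the paper).
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)
    x = x_prior.copy()
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y) + lam * (x - x_prior)
        x = x - step * grad
    return x


# Toy usage with random data, only to show the call pattern.
A = np.random.rand(200, 100)
x_true = np.random.rand(100)
y = A @ x_true
x_prior = x_true + 0.1 * np.random.rand(100)  # stand-in for the CNN output
x_rec = cnn_regularized_recon(A, y, x_prior, lam=0.5)
```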