Gonzales Ricardo A, Zhang Qiang, Papież Bartłomiej W, Werys Konrad, Lukaschuk Elena, Popescu Iulia A, Burrage Matthew K, Shanmuganathan Mayooran, Ferreira Vanessa M, Piechnik Stefan K
Oxford Centre for Clinical Magnetic Resonance Research (OCMR), Division of Cardiovascular Medicine, Radcliffe Department of Medicine, University of Oxford, Oxford, United Kingdom.
Nuffield Department of Population Health, University of Oxford, Oxford, United Kingdom.
Front Cardiovasc Med. 2021 Nov 23;8:768245. doi: 10.3389/fcvm.2021.768245. eCollection 2021.
Quantitative cardiovascular magnetic resonance (CMR) T1 mapping has shown promise for advanced tissue characterisation in routine clinical practice. However, T1 mapping is prone to motion artefacts, which affect its robustness and clinical interpretation. Current methods for motion correction on T1 mapping are model-driven with no guarantee of generalisability, limiting their widespread use. In contrast, emerging data-driven deep learning approaches have shown good performance in general image registration tasks. We propose MOCOnet, a convolutional neural network solution, for generalisable motion artefact correction in T1 maps. The network architecture employs a U-Net for producing displacement vector fields and utilises warping layers to apply deformation to the feature maps in a coarse-to-fine manner. Using the UK Biobank imaging dataset scanned at 1.5T, MOCOnet was trained on 1,536 mid-ventricular T1 maps (acquired using the ShMOLLI method) with motion artefacts, generated by a customised deformation procedure, and tested on a different set of 200 samples with a diverse range of motion. MOCOnet was compared to a well-validated baseline multi-modal image registration method. Motion reduction was visually assessed by 3 human experts, with motion scores ranging from 0% (strictly no motion) to 100% (very severe motion). MOCOnet achieved fast image registration (<1 second per T1 map) and successfully suppressed a wide range of motion artefacts. MOCOnet significantly reduced motion scores from 37.1±21.5 to 13.3±10.5 (P < 0.001), whereas the baseline method reduced them to 15.8±15.6 (P < 0.001). MOCOnet suppressed motion artefacts significantly better and more consistently than the baseline method (P = 0.007). MOCOnet demonstrated significantly better motion correction performance compared to a traditional image registration approach.
Salvaging motion-affected data robustly and in a time-efficient manner may enable better image quality and reliable images for immediate clinical interpretation.
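The core building block the abstract describes, a warping layer that applies a dense displacement vector field (DVF) to an image or feature map by bilinear resampling, can be sketched in a few lines of NumPy. This is an illustrative sketch of the general technique, not the authors' implementation; the function name and array conventions are our own assumptions.

```python
import numpy as np

def warp(image, dvf):
    """Warp a 2-D image with a dense displacement vector field (DVF).

    image: (H, W) array; dvf: (H, W, 2) array of per-pixel (dy, dx)
    displacements. Each output pixel samples the input at its own
    location plus the displacement, using bilinear interpolation with
    border clamping. A network like MOCOnet would predict the DVF with
    a U-Net and apply such a layer at multiple resolutions
    (coarse-to-fine); this sketch shows a single warp.
    """
    H, W = image.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Source coordinates = target grid + predicted displacement, clamped.
    sy = np.clip(ys + dvf[..., 0], 0, H - 1)
    sx = np.clip(xs + dvf[..., 1], 0, W - 1)
    # Integer corners and fractional weights for bilinear interpolation.
    y0 = np.floor(sy).astype(int)
    x0 = np.floor(sx).astype(int)
    y1 = np.minimum(y0 + 1, H - 1)
    x1 = np.minimum(x0 + 1, W - 1)
    wy = sy - y0
    wx = sx - x0
    top = image[y0, x0] * (1 - wx) + image[y0, x1] * wx
    bot = image[y1, x0] * (1 - wx) + image[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

In a coarse-to-fine scheme, a DVF estimated at low resolution is upsampled and used to pre-warp the finer level before the residual displacement is estimated there, which is what makes large motions tractable for the network.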