IEEE Trans Med Imaging. 2023 Sep;42(9):2577-2591. doi: 10.1109/TMI.2023.3261707. Epub 2023 Aug 31.
Multi-contrast magnetic resonance imaging (MRI) is widely used in clinical practice as each contrast provides complementary information. However, the availability of each imaging contrast may vary amongst patients, which poses challenges to radiologists and automated image analysis algorithms. A general approach for tackling this problem is missing data imputation, which aims to synthesize the missing contrasts from existing ones. While several convolutional neural network (CNN)-based algorithms have been proposed, they suffer from the fundamental limitations of CNN models, such as the requirement for fixed numbers of input and output channels, the inability to capture long-range dependencies, and the lack of interpretability. In this work, we formulate missing data imputation as a sequence-to-sequence learning problem and propose a multi-contrast multi-scale Transformer (MMT), which can take any subset of input contrasts and synthesize those that are missing. MMT consists of a multi-scale Transformer encoder that builds hierarchical representations of the inputs and a multi-scale Transformer decoder that generates the outputs in a coarse-to-fine fashion. The proposed multi-contrast Swin Transformer blocks can efficiently capture intra- and inter-contrast dependencies for accurate image synthesis. Moreover, MMT is inherently interpretable, as it allows us to understand the importance of each input contrast in different regions by analyzing the built-in attention maps of the Transformer blocks in the decoder. Extensive experiments on two large-scale multi-contrast MRI datasets demonstrate that MMT outperforms state-of-the-art methods quantitatively and qualitatively.
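To make the sequence-to-sequence formulation concrete, the sketch below shows how an arbitrary subset of available contrasts can be tokenized and encoded, and how missing contrasts can be decoded from learned queries that cross-attend to the encoded inputs. This is a minimal illustrative sketch, not the authors' MMT implementation: standard PyTorch Transformer layers stand in for the paper's multi-contrast Swin Transformer blocks and multi-scale design, and all module names, image sizes, and hyperparameters are assumptions chosen for brevity.

```python
# Minimal sketch of missing-contrast imputation as sequence-to-sequence learning.
# NOT the authors' MMT: plain nn.Transformer layers replace the multi-contrast
# Swin Transformer blocks described in the abstract; names and sizes are illustrative.
import torch
import torch.nn as nn


class ContrastImputer(nn.Module):
    def __init__(self, num_contrasts=4, img_size=64, patch=8, dim=128, depth=2, heads=4):
        super().__init__()
        self.patch = patch
        self.num_tokens = (img_size // patch) ** 2
        # Shared patch embedding (single-channel MRI slices assumed).
        self.to_tokens = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        # Learned embedding indicating which contrast each token came from.
        self.contrast_emb = nn.Embedding(num_contrasts, dim)
        self.pos_emb = nn.Parameter(torch.randn(1, self.num_tokens, dim) * 0.02)

        enc_layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        dec_layer = nn.TransformerDecoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, depth)
        self.decoder = nn.TransformerDecoder(dec_layer, depth)
        # One set of learned queries per (contrast, patch position) to be synthesized.
        self.queries = nn.Parameter(torch.randn(num_contrasts, self.num_tokens, dim) * 0.02)
        # Map decoded tokens back to pixel patches.
        self.to_pixels = nn.Linear(dim, patch * patch)

    def forward(self, images, available, missing):
        # images: dict {contrast_id: (B, 1, H, W)} holding only the available contrasts.
        B = next(iter(images.values())).shape[0]
        tokens = []
        for c in available:
            t = self.to_tokens(images[c]).flatten(2).transpose(1, 2)   # (B, N, dim)
            t = t + self.pos_emb + self.contrast_emb.weight[c]
            tokens.append(t)
        # Variable-length input sequence: any subset of contrasts can be encoded.
        memory = self.encoder(torch.cat(tokens, dim=1))

        outputs = {}
        side = int(self.num_tokens ** 0.5)
        for c in missing:
            q = (self.queries[c] + self.pos_emb).expand(B, -1, -1)
            dec = self.decoder(q, memory)                 # cross-attend to the inputs
            patches = self.to_pixels(dec)                 # (B, N, patch*patch)
            img = patches.view(B, side, side, self.patch, self.patch)
            img = img.permute(0, 1, 3, 2, 4).reshape(B, 1, side * self.patch, side * self.patch)
            outputs[c] = img
        return outputs


if __name__ == "__main__":
    model = ContrastImputer()
    # Two contrasts present (ids 0 and 2), two to synthesize (ids 1 and 3).
    x = {0: torch.randn(2, 1, 64, 64), 2: torch.randn(2, 1, 64, 64)}
    preds = model(x, available=[0, 2], missing=[1, 3])
    print({c: p.shape for c, p in preds.items()})         # each (2, 1, 64, 64)
```

Note that the decoder's cross-attention weights between the learned queries and the encoded input tokens are what give this formulation its interpretability: inspecting them shows how much each input contrast contributes to each synthesized region, which is the role the in-built attention maps play in the paper's decoder.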