IEEE Trans Med Imaging. 2021 May;40(5):1417-1427. doi: 10.1109/TMI.2021.3056678. Epub 2021 Apr 30.
In clinical practice, information about the appearance and location of brain tumors is essential to assist doctors in diagnosis and treatment. Automatic brain tumor segmentation on images acquired by magnetic resonance imaging (MRI) is a common way to obtain this information. However, MR images are not quantitative and can exhibit significant variation in signal depending on a range of factors, which increases the difficulty of training an automatic segmentation network and applying it to new MR images. To address this issue, this paper proposes to learn a sample-adaptive intensity lookup table (LuT) that dynamically transforms the intensity contrast of each input MR image to adapt to the subsequent segmentation task. Specifically, the proposed deep SA-LuT-Net framework consists of a LuT module and a segmentation module, trained in an end-to-end manner: the LuT module learns a sample-specific nonlinear intensity mapping function through communication with the segmentation module, aiming to improve the final segmentation performance. To make the LuT learning sample-adaptive, we parameterize the intensity mapping function by exploring two families of nonlinear functions (i.e., piece-wise linear and power functions) and predict the function parameters for each given sample. These sample-specific parameters make the intensity mapping adaptive to individual samples. We develop our SA-LuT-Nets separately on two backbone segmentation networks, i.e., DMFNet and a modified 3D Unet, and validate them on the BRATS2018 and BRATS2019 datasets for brain tumor segmentation. Our experimental results clearly demonstrate the superior performance of the proposed SA-LuT-Nets using either single or multiple MR modalities. SA-LuT-Net not only significantly improves the two baselines (DMFNet and the modified 3D Unet), but also outperforms a set of state-of-the-art segmentation methods.
Moreover, we show that the LuTs learnt with one segmentation model can also be applied to improve the performance of another segmentation model, indicating that the LuTs capture segmentation-relevant information of general utility.
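To make the two parametric families concrete, the following is a minimal NumPy sketch of the intensity mappings the abstract describes: a power-function LuT and a piece-wise linear LuT applied to an intensity-normalized image. The function names and the per-sample parameters (`gamma`, the knot coordinates) are illustrative assumptions; in SA-LuT-Net these parameters would be predicted per input sample by the learned LuT module, not fixed by hand.

```python
import numpy as np

def power_lut(image, gamma):
    """Power-function intensity mapping x -> x**gamma.
    Assumes `image` is normalized to [0, 1]; `gamma` stands in for the
    per-sample parameter the LuT module would predict."""
    return np.clip(image, 0.0, 1.0) ** gamma

def piecewise_linear_lut(image, knots_x, knots_y):
    """Piece-wise linear intensity mapping defined by control points.
    (knots_x, knots_y) stand in for the per-sample parameters the LuT
    module would predict; np.interp interpolates linearly between them."""
    return np.interp(np.clip(image, 0.0, 1.0), knots_x, knots_y)

# Illustrative use: remap a small normalized intensity array.
img = np.array([0.0, 0.25, 0.5, 1.0])
darkened = power_lut(img, 2.0)                       # suppress low intensities
boosted = piecewise_linear_lut(img,
                               np.array([0.0, 0.5, 1.0]),
                               np.array([0.0, 0.8, 1.0]))  # stretch mid-range
```

Both mappings are monotonic and differentiable (almost everywhere, in the piece-wise case), which is what allows the LuT module to be trained end-to-end with the segmentation loss.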