Konwer Aishik, Hu Xiaoling, Bae Joseph, Xu Xuan, Chen Chao, Prasanna Prateek
Department of Computer Science, Stony Brook University.
Department of Biomedical Informatics, Stony Brook University.
Proc IEEE Int Conf Comput Vis. 2023 Oct;2023:21358-21368. doi: 10.1109/iccv51070.2023.01958.
In medical vision, different imaging modalities provide complementary information. However, in practice, not all modalities may be available during inference or even training. Previous approaches, e.g., knowledge distillation or image synthesis, often assume the availability of full modalities for all subjects during training; this is unrealistic and impractical due to the variability in data collection across sites. We propose a novel approach to learn enhanced modality-agnostic representations by employing a meta-learning strategy in training, even when only limited full modality samples are available. Meta-learning enhances partial modality representations to full modality representations by meta-training on partial modality data and meta-testing on limited full modality samples. Additionally, we co-supervise this feature enrichment by introducing an auxiliary adversarial learning branch. More specifically, a missing modality detector is used as a discriminator to mimic the full modality setting. Our segmentation framework significantly outperforms state-of-the-art brain tumor segmentation techniques in missing modality scenarios.
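The meta-learning recipe described in the abstract (meta-train on abundant partial-modality data, meta-test on scarce full-modality samples, then update the meta-parameters) can be illustrated with a minimal Reptile-style sketch on a toy regression task. This is a simplified illustration, not the paper's architecture: the linear map `W` stands in for the encoder plus head, missing modalities are zero-filled, and the auxiliary adversarial branch with the missing-modality detector is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
D_MOD, N_MOD = 4, 3  # per-modality feature dim, number of modalities

# Toy data: each sample concatenates N_MOD modality vectors;
# absent modalities are zero-filled (a common stand-in for "missing").
def make_batch(n, missing_prob):
    x = rng.normal(size=(n, N_MOD, D_MOD))
    mask = rng.random((n, N_MOD)) > missing_prob  # True = modality present
    mask[:, 0] = True                             # keep at least one modality
    x = x * mask[:, :, None]
    y = x.reshape(n, -1).sum(axis=1, keepdims=True)  # toy regression target
    return x.reshape(n, -1), y

def loss_and_grad(W, x, y):
    err = x @ W - y                     # linear "encoder + head" stand-in
    return float((err ** 2).mean()), 2 * x.T @ err / len(x)

W = rng.normal(scale=0.1, size=(N_MOD * D_MOD, 1))
inner_lr, meta_lr = 0.05, 0.5

for step in range(200):
    # Meta-train: adapt on abundant partial-modality data.
    x_p, y_p = make_batch(32, missing_prob=0.5)
    _, g = loss_and_grad(W, x_p, y_p)
    W_adapted = W - inner_lr * g
    # Meta-test: a further step on the limited full-modality samples.
    x_f, y_f = make_batch(8, missing_prob=0.0)
    _, g_f = loss_and_grad(W_adapted, x_f, y_f)
    W_adapted = W_adapted - inner_lr * g_f
    # Reptile-style outer update: move meta-weights toward adapted weights.
    W = W + meta_lr * (W_adapted - W)

# After meta-training, evaluate in the full-modality setting.
loss_full, _ = loss_and_grad(W, *make_batch(256, missing_prob=0.0))
print(round(loss_full, 4))
```

Because the inner loop only ever sees partial-modality batches while the outer objective is scored on full-modality batches, the meta-update pushes the shared parameters toward representations that transfer across modality configurations, which is the intuition behind the paper's modality-agnostic feature enrichment.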