Sicilia Anthony, Zhao Xingchen, Minhas Davneet S, O'Connor Erin E, Aizenstein Howard J, Klunk William E, Tudorascu Dana L, Hwang Seong Jae
Intelligent Systems Program - University of Pittsburgh.
Department of Computer Science, University of Pittsburgh.
Proc IEEE Int Symp Biomed Imaging. 2021 Apr;2021:650-654. doi: 10.1109/ISBI48211.2021.9433977. Epub 2021 May 25.
We consider a model-agnostic solution to the problem of Multi-Domain Learning (MDL) for multi-modal applications. Many existing MDL techniques are model-dependent solutions which explicitly require nontrivial architectural changes to construct domain-specific modules. Thus, properly applying these MDL techniques to new problems with well-established models, e.g., U-Net for semantic segmentation, may demand various low-level implementation efforts. In this paper, given emerging multi-modal data (e.g., various structural neuroimaging modalities), we aim to enable MDL purely algorithmically so that widely used neural networks can trivially achieve MDL in a model-independent manner. To this end, we consider a weighted loss function and extend it to an effective procedure by employing techniques from the recently active area of learning-to-learn (meta-learning). Specifically, we take inner-loop gradient steps to dynamically estimate posterior distributions over the hyperparameters of our loss function. Thus, our method requires no additional model parameters and no network architecture changes; instead, only a few efficient algorithmic modifications are needed to improve performance in MDL. We demonstrate our solution on a fitting problem in medical imaging: the automatic segmentation of white matter hyperintensities (WMH). We use two neuroimaging modalities (T1-MR and FLAIR) that provide complementary information well-suited to this problem.
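To make the core idea concrete, the following is a minimal NumPy sketch of a weighted multi-domain loss whose weights are adapted by inner-loop gradient steps, in the spirit of the abstract. All specifics here are illustrative assumptions, not the authors' implementation: the per-domain losses are toy quadratics standing in for segmentation losses on each modality, the weights are a softmax over learnable logits rather than a full posterior, and the learning rates and step counts are arbitrary.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical per-domain losses: toy quadratics standing in for the
# segmentation losses of two modalities (e.g., T1-MR and FLAIR).
targets = np.array([1.0, -0.5])

def domain_losses(theta):
    return (theta - targets) ** 2

theta = 0.0          # shared model parameter (stands in for network weights)
alpha = np.zeros(2)  # logits of the per-domain loss weights
lr_theta, lr_alpha = 0.1, 0.1

for step in range(300):
    # Inner loop: a few gradient steps on the weight logits, shifting
    # weight toward the currently harder domain (a crude stand-in for
    # dynamically estimating a distribution over loss hyperparameters).
    for _ in range(2):
        losses = domain_losses(theta)
        w = softmax(alpha)
        # gradient of the weighted loss w.r.t. alpha (softmax chain rule)
        grad_alpha = w * (losses - np.dot(w, losses))
        alpha += lr_alpha * grad_alpha  # ascend: upweight the harder domain

    # Outer step: update the shared parameter on the weighted loss.
    w = softmax(alpha)
    grad_theta = np.dot(w, 2 * (theta - targets))
    theta -= lr_theta * grad_theta

w = softmax(alpha)
print("theta:", theta, "weights:", w)
```

Note that no extra model parameters are introduced: only the scalar loss-weight logits and a modified training loop, which is the model-agnostic property the abstract emphasizes.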