

Multi-Domain Learning by Meta-Learning: Taking Optimal Steps in Multi-Domain Loss Landscapes by Inner-Loop Learning

Author Information

Sicilia Anthony, Zhao Xingchen, Minhas Davneet S, O'Connor Erin E, Aizenstein Howard J, Klunk William E, Tudorascu Dana L, Hwang Seong Jae

Affiliations

Intelligent Systems Program, University of Pittsburgh.

Department of Computer Science, University of Pittsburgh.

Publication Information

Proc IEEE Int Symp Biomed Imaging. 2021 Apr;2021:650-654. doi: 10.1109/ISBI48211.2021.9433977. Epub 2021 May 25.

Abstract

We consider a model-agnostic solution to the problem of Multi-Domain Learning (MDL) for multi-modal applications. Many existing MDL techniques are model-dependent solutions which explicitly require nontrivial architectural changes to construct domain-specific modules. Thus, properly applying these MDL techniques to new problems with well-established models, e.g. U-Net for semantic segmentation, may demand various low-level implementation efforts. In this paper, given emerging multi-modal data (e.g., various structural neuroimaging modalities), we aim to enable MDL purely algorithmically so that widely used neural networks can trivially achieve MDL in a model-independent manner. To this end, we consider a weighted loss function and extend it to an effective procedure by employing techniques from the recently active area of learning-to-learn (meta-learning). Specifically, we take inner-loop gradient steps to dynamically estimate posterior distributions over the hyperparameters of our loss function. Thus, our method is model-agnostic, requiring no additional model parameters and no network architecture changes; instead, only a few efficient algorithmic modifications are needed to improve performance in MDL. We demonstrate our solution on a fitting problem in medical imaging: the automatic segmentation of white matter hyperintensity (WMH). We look at two neuroimaging modalities (T1-MR and FLAIR) with complementary information suited to our problem.
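The core idea of the abstract, a weighted multi-domain loss whose domain weights are adapted by inner-loop gradient steps before each outer-loop model update, can be sketched roughly as follows. This is a simplified illustration only: the 1-D linear model, the toy data, and the softmax parameterization of the domain weights are assumptions for the sketch, not the paper's actual WMH segmentation setup or its posterior-estimation procedure.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - np.max(a))
    return e / e.sum()

def domain_losses(theta, data):
    # Per-domain mean squared error for a toy 1-D linear model y = theta * x.
    return np.array([np.mean((theta * x - y) ** 2) for x, y in data])

def mdl_step(theta, alpha, data, lr_inner=0.1, lr_outer=0.01):
    """One training step for the weighted multi-domain loss
    L(theta, alpha) = sum_d softmax(alpha)_d * L_d(theta)."""
    losses = domain_losses(theta, data)

    # Inner loop: one gradient step on the weight logits alpha.
    # Gradient of sum_d w_d * L_d with w = softmax(alpha) is
    # w_k * (L_k - sum_d w_d * L_d) for each component k.
    w = softmax(alpha)
    grad_alpha = w * (losses - np.dot(w, losses))
    alpha = alpha - lr_inner * grad_alpha

    # Outer loop: update the model under the freshly adapted weights.
    w = softmax(alpha)
    grad_theta = sum(
        w_d * np.mean(2 * (theta * x - y) * x) for w_d, (x, y) in zip(w, data)
    )
    theta = theta - lr_outer * grad_theta
    return theta, alpha

# Two toy "domains" (e.g. two modalities) sharing the same underlying slope of 2.
data = [
    (np.array([1.0, 2.0, 3.0]), np.array([2.0, 4.0, 6.0])),
    (np.array([0.5, 1.5]), np.array([1.0, 3.0])),
]
theta, alpha = 0.0, np.zeros(2)
for _ in range(300):
    theta, alpha = mdl_step(theta, alpha, data)
```

Because the weights live only in the loss, the model itself needs no extra parameters or architectural changes, which is the sense in which the approach is model-agnostic.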


Similar Articles

1. What and Where: Learn to Plug Adapters via NAS for Multidomain Learning.
IEEE Trans Neural Netw Learn Syst. 2022 Nov;33(11):6532-6544. doi: 10.1109/TNNLS.2021.3082316. Epub 2022 Oct 27.
2. Robust White Matter Hyperintensity Segmentation on Unseen Domain.
Proc IEEE Int Symp Biomed Imaging. 2021 Apr;2021:1047-1051. doi: 10.1109/ISBI48211.2021.9434034. Epub 2021 May 25.
3. Hi-Net: Hybrid-Fusion Network for Multi-Modal MR Image Synthesis.
IEEE Trans Med Imaging. 2020 Sep;39(9):2772-2781. doi: 10.1109/TMI.2020.2975344. Epub 2020 Feb 20.
