
Incremental Embedding Learning With Disentangled Representation Translation.

Authors

Wei Kun, Chen Da, Li Yuhong, Yang Xu, Deng Cheng, Tao Dacheng

Publication

IEEE Trans Neural Netw Learn Syst. 2024 Mar;35(3):3821-3833. doi: 10.1109/TNNLS.2022.3199816. Epub 2024 Feb 29.

Abstract

Humans are capable of accumulating knowledge by sequentially learning different tasks, while neural networks fail to achieve this due to the catastrophic forgetting problem. Most current incremental learning methods focus on tackling catastrophic forgetting for traditional classification networks. Notably, however, embedding networks, the basic architectures for many metric learning applications, also suffer from this problem. Moreover, the most significant difficulty for continual embedding networks is that the relationships between the latent features and prototypes of previous tasks are destroyed once new tasks have been learned. Accordingly, we propose a novel incremental method for embedding networks, called the disentangled representation translation (DRT) method, to obtain discriminative class-disentangled features without reusing any samples of previous tasks and while avoiding perturbation of task-related information. Next, a mask-guided module is specifically explored to adaptively change or retain the valuable information of latent features. This module enables us to effectively preserve discriminative yet representative features in the disentangled translation process. In addition, DRT can easily be equipped with a regularization term for incremental learning to further improve performance. We conduct extensive experiments on four popular datasets; the results clearly demonstrate that our method can effectively alleviate the catastrophic forgetting problem for embedding networks.
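The mask-guided idea described in the abstract — adaptively deciding, per feature dimension, whether to change a latent value via translation or retain the original — can be sketched as a learned soft gate. The following is a minimal illustrative sketch, not the authors' implementation: the names `mask_guided_translate`, `translate`, and `mask_weights`, and all shapes, are hypothetical assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mask_guided_translate(z, translate, mask_weights):
    """Blend a translated latent feature with the original one.

    A soft mask m in (0, 1), computed from the feature itself, decides
    per dimension whether to adopt the translated value ("change") or
    keep the original value ("retain").
    """
    m = sigmoid(z @ mask_weights)              # per-dimension gate in (0, 1)
    return m * translate(z) + (1.0 - m) * z    # convex blend of new and old

# Toy usage: with an identity "translator", the blend returns the
# original feature regardless of what the mask decides.
rng = np.random.default_rng(0)
z = rng.standard_normal(8)                     # a latent feature vector
W = rng.standard_normal((8, 8))                # hypothetical mask parameters
out = mask_guided_translate(z, lambda x: x, W)
```

In an actual incremental setting, `translate` would be a trained network mapping old-task features into the new embedding space; the gate lets dimensions that carry still-valid discriminative information pass through unchanged.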

