Self-augmentation: Generalizing deep networks to unseen classes for few-shot learning.

Affiliations

Department of Brain and Cognitive Engineering, Korea University, Anam-dong, Seongbuk-gu, Seoul, 02841, Republic of Korea.

Department of Artificial Intelligence, Korea University, Anam-dong, Seongbuk-gu, Seoul, 02841, Republic of Korea.

Publication Info

Neural Netw. 2021 Jun;138:140-149. doi: 10.1016/j.neunet.2021.02.007. Epub 2021 Feb 17.

DOI: 10.1016/j.neunet.2021.02.007
PMID: 33652370
Abstract

Few-shot learning aims to classify unseen classes with a few training examples. While recent works have shown that standard mini-batch training with carefully designed training strategies can improve generalization ability for unseen classes, well-known problems in deep networks such as memorizing training statistics have been less explored for few-shot learning. To tackle this issue, we propose self-augmentation that consolidates self-mix and self-distillation. Specifically, we propose a regional dropout technique called self-mix, in which a patch of an image is substituted into other values in the same image. With this dropout effect, we show that the generalization ability of deep networks can be improved as it prevents us from learning specific structures of a dataset. Then, we employ a backbone network that has auxiliary branches with its own classifier to enforce knowledge sharing. This sharing of knowledge forces each branch to learn diverse optimal points during training. Additionally, we present a local representation learner to further exploit a few training examples of unseen classes by generating fake queries and novel weights. Experimental results show that the proposed method outperforms the state-of-the-art methods for prevalent few-shot benchmarks and improves the generalization ability.
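
The abstract describes self-mix only at a high level: a patch of an image is replaced with values taken from elsewhere in the same image. As a rough illustration of that regional-dropout idea, here is a minimal NumPy sketch. The function name self_mix, the patch_frac parameter, and the uniform sampling of source and destination locations are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def self_mix(image, patch_frac=0.25, rng=None):
    """Sketch of self-mix-style regional dropout: overwrite a random
    patch of `image` with a patch copied from another location in the
    SAME image. `patch_frac` and the uniform location sampling are
    illustrative assumptions, not the paper's exact procedure."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    ph = max(1, int(h * patch_frac))
    pw = max(1, int(w * patch_frac))

    # Destination patch: the region to be "dropped out".
    dy = rng.integers(0, h - ph + 1)
    dx = rng.integers(0, w - pw + 1)
    # Source patch: values taken from elsewhere in the same image.
    sy = rng.integers(0, h - ph + 1)
    sx = rng.integers(0, w - pw + 1)

    out = image.copy()
    out[dy:dy + ph, dx:dx + pw] = image[sy:sy + ph, sx:sx + pw]
    return out

# Example: augment one 84x84 RGB image (a common few-shot input size).
img = np.random.rand(84, 84, 3).astype(np.float32)
aug = self_mix(img)
```

In a training pipeline such a transform would be applied on the fly to mini-batch images; the paper pairs it with self-distillation across auxiliary branches, which is not sketched here.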


Similar Articles

1. Self-augmentation: Generalizing deep networks to unseen classes for few-shot learning. Neural Netw. 2021 Jun;138:140-149. doi: 10.1016/j.neunet.2021.02.007. Epub 2021 Feb 17.
2. Multi-label zero-shot learning with graph convolutional networks. Neural Netw. 2020 Dec;132:333-341. doi: 10.1016/j.neunet.2020.09.010. Epub 2020 Sep 21.
3. Modality independent adversarial network for generalized zero shot image classification. Neural Netw. 2021 Feb;134:11-22. doi: 10.1016/j.neunet.2020.11.007. Epub 2020 Nov 21.
4. Few-shot learning in deep networks through global prototyping. Neural Netw. 2017 Oct;94:159-172. doi: 10.1016/j.neunet.2017.07.001. Epub 2017 Jul 24.
5. Self-Supervised Learning for Few-Shot Medical Image Segmentation. IEEE Trans Med Imaging. 2022 Jul;41(7):1837-1848. doi: 10.1109/TMI.2022.3150682. Epub 2022 Jun 30.
6. A conditional Triplet loss for few-shot learning and its application to image co-segmentation. Neural Netw. 2021 May;137:54-62. doi: 10.1016/j.neunet.2021.01.002. Epub 2021 Jan 20.
7. Greedy auto-augmentation for n-shot learning using deep neural networks. Neural Netw. 2021 Mar;135:68-77. doi: 10.1016/j.neunet.2020.11.015. Epub 2020 Dec 13.
8. SAN-Net: Learning generalization to unseen sites for stroke lesion segmentation with self-adaptive normalization. Comput Biol Med. 2023 Apr;156:106717. doi: 10.1016/j.compbiomed.2023.106717. Epub 2023 Feb 28.
9. FSCC: Few-Shot Learning for Macromolecule Classification Based on Contrastive Learning and Distribution Calibration in Cryo-Electron Tomography. Front Mol Biosci. 2022 Jul 5;9:931949. doi: 10.3389/fmolb.2022.931949. eCollection 2022.
10. Visual-guided attentive attributes embedding for zero-shot learning. Neural Netw. 2021 Nov;143:709-718. doi: 10.1016/j.neunet.2021.07.031. Epub 2021 Aug 11.

Cited By

1. VOC-Certifire: Certifiably Robust One-Shot Spectroscopic Classification via Randomized Smoothing. ACS Omega. 2024 Sep 4;9(37):39033-39042. doi: 10.1021/acsomega.4c05757. eCollection 2024 Sep 17.
2. SalfMix: A Novel Single Image-Based Data Augmentation Technique Using a Saliency Map. Sensors (Basel). 2021 Dec 17;21(24):8444. doi: 10.3390/s21248444.
3. Few-shot contrastive learning for image classification and its application to insulator identification. Appl Intell (Dordr). 2022;52(6):6148-6163. doi: 10.1007/s10489-021-02769-6. Epub 2021 Sep 2.
4. Detecting Novel Ototoxins and Potentiation of Ototoxicity by Disease Settings. Front Neurol. 2021 Aug 17;12:725566. doi: 10.3389/fneur.2021.725566. eCollection 2021.