Department of Brain and Cognitive Engineering, Korea University, Anam-dong, Seongbuk-gu, Seoul, 02841, Republic of Korea.
Department of Artificial Intelligence, Korea University, Anam-dong, Seongbuk-gu, Seoul, 02841, Republic of Korea.
Neural Netw. 2021 Jun;138:140-149. doi: 10.1016/j.neunet.2021.02.007. Epub 2021 Feb 17.
Few-shot learning aims to classify unseen classes with only a few training examples. While recent work has shown that standard mini-batch training with carefully designed training strategies can improve generalization to unseen classes, well-known problems in deep networks, such as memorizing training statistics, have been less explored in few-shot learning. To tackle this issue, we propose self-augmentation, which consolidates self-mix and self-distillation. Specifically, we propose a regional dropout technique called self-mix, in which a patch of an image is replaced with values from another region of the same image. We show that this dropout effect improves the generalization ability of deep networks by preventing them from memorizing specific structures of the dataset. We then employ a backbone network with auxiliary branches, each with its own classifier, to enforce knowledge sharing. This knowledge sharing forces each branch to learn diverse optima during training. Additionally, we present a local representation learner that further exploits the few training examples of unseen classes by generating fake queries and novel weights. Experimental results show that the proposed method outperforms state-of-the-art methods on prevalent few-shot benchmarks and improves generalization ability.
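The abstract describes self-mix only at a high level (a patch of the image is overwritten with values taken from the same image). The following is a minimal NumPy sketch of that idea under stated assumptions: the patch size, the independent sampling of source and destination locations, and the function name `self_mix` are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def self_mix(image, patch_size=16, rng=None):
    """Sketch of a self-mix style regional dropout: a randomly chosen
    destination patch is overwritten with a patch cut from another
    location of the *same* image (no second image is involved).
    Patch size and sampling scheme are assumptions for illustration."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    # Sample destination and source top-left corners independently.
    y_dst = rng.integers(0, h - patch_size)
    x_dst = rng.integers(0, w - patch_size)
    y_src = rng.integers(0, h - patch_size)
    x_src = rng.integers(0, w - patch_size)
    out = image.copy()
    # Overwrite the destination patch with the source patch from the same image.
    out[y_dst:y_dst + patch_size, x_dst:x_dst + patch_size] = \
        image[y_src:y_src + patch_size, x_src:x_src + patch_size]
    return out

# Example usage on a dummy 84x84 RGB image (a common few-shot input size).
augmented = self_mix(np.random.rand(84, 84, 3).astype(np.float32))
```

Because the substituted values come from the same image, the label is unchanged, which is what lets this act as a regularizing dropout rather than a label-mixing augmentation.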