Zhang Rujun, Liu Qifan
College of Electronics and Information Engineering, Shenzhen University, Shenzhen, China.
Front Comput Neurosci. 2023 Jan 5;16:1075294. doi: 10.3389/fncom.2022.1075294. eCollection 2022.
Deep learning has achieved enormous success in various computer tasks. This excellent performance, however, depends heavily on adequate training datasets, and it is difficult to obtain abundant samples in practical applications. Few-shot learning has been proposed to address this data limitation in the training process: it performs rapid learning from few samples by exploiting prior knowledge. In this paper, we focus on few-shot classification and survey recent methods. First, we elaborate on the definition of the few-shot classification problem. Then we propose a newly organized taxonomy, discuss the application scenarios in which each method is effective, and compare the pros and cons of different methods. We classify few-shot image classification methods from four perspectives: (i) data augmentation, which covers sample-level and task-level augmentation; (ii) metric-based methods, analyzed in terms of both the feature embedding and the metric function; (iii) optimization-based methods, compared from the aspects of self-learning and mutual learning; and (iv) model-based methods, discussed from the perspectives of memory, rapid adaptation, and multi-task learning. Finally, we conclude the paper and discuss future research prospects.
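To make the episodic N-way K-shot setting and the metric-based family concrete, the following is a minimal sketch (not taken from the paper) of a Prototypical-Networks-style episode: class prototypes are the mean support embeddings, and queries are classified by nearest prototype. The embeddings and data here are synthetic placeholders standing in for the output of a learned feature extractor.

```python
import numpy as np

def prototypical_episode(support, support_labels, query, n_way):
    """Classify query embeddings in an N-way K-shot episode by
    nearest class prototype (mean of support embeddings), in the
    spirit of metric-based few-shot methods.

    support:        (n_way * k_shot, d) support-set embeddings
    support_labels: (n_way * k_shot,) integer class ids in [0, n_way)
    query:          (n_query, d) query-set embeddings
    """
    # One prototype per class: the mean embedding of its support samples.
    prototypes = np.stack(
        [support[support_labels == c].mean(axis=0) for c in range(n_way)]
    )
    # Squared Euclidean distance from each query to each prototype.
    dists = ((query[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    # Predicted class = index of the nearest prototype.
    return dists.argmin(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_way, k_shot, d = 5, 1, 64          # a 5-way 1-shot episode, 64-dim embeddings
    support = rng.normal(size=(n_way * k_shot, d))
    support_labels = np.repeat(np.arange(n_way), k_shot)
    query = support + 0.1 * rng.normal(size=support.shape)  # noisy copies of support
    print(prototypical_episode(support, support_labels, query, n_way))
```

In practice the embeddings would come from a trained backbone, and the negative distances would be passed through a softmax to obtain class probabilities for training; this sketch only shows the episode structure and the prototype-based decision rule.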