
Unsupervised Few-Shot Feature Learning via Self-Supervised Training.

Authors

Ji Zilong, Zou Xiaolong, Huang Tiejun, Wu Si

Affiliations

State Key Laboratory of Cognitive Neuroscience & Learning, Beijing Normal University, Beijing, China.

School of Electronics Engineering & Computer Science, Peking University, Beijing, China.

Publication

Front Comput Neurosci. 2020 Oct 14;14:83. doi: 10.3389/fncom.2020.00083. eCollection 2020.

Abstract

Learning from limited exemplars (few-shot learning) is a fundamental, unsolved problem that has been laboriously explored in the machine learning community. However, current few-shot learners are mostly supervised and rely heavily on a large number of labeled examples. Unsupervised learning is a more natural procedure for cognitive mammals and has produced promising results in many machine learning tasks. In this paper, we propose an unsupervised feature learning method for few-shot learning. The proposed model consists of two alternating processes, progressive clustering and episodic training. The former generates pseudo-labeled training examples for constructing episodic tasks, and the latter trains the few-shot learner on the generated episodic tasks, which further optimizes the feature representations of the data. The two processes facilitate each other and eventually produce a high-quality few-shot learner. In our experiments, our model achieves good generalization performance on a variety of downstream few-shot learning tasks on Omniglot and MiniImageNet. We also construct a new few-shot person re-identification dataset, FS-Market1501, to demonstrate the applicability of our model to a real-world application.
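The two alternating steps the abstract describes (clustering to produce pseudo-labels, then building N-way K-shot episodic tasks from those labels) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: plain k-means stands in for the paper's progressive clustering, the learned feature extractor is omitted, and the function names and parameters (`kmeans`, `sample_episode`, `n_way`, `k_shot`, `q_query`) are hypothetical.

```python
import numpy as np

def kmeans(features, k, iters=10, seed=0):
    """Pseudo-label features with plain k-means (a stand-in for the
    paper's progressive clustering step)."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        dists = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        # Recompute centers from current assignments.
        for j in range(k):
            members = features[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels

def sample_episode(features, labels, n_way=5, k_shot=1, q_query=2, seed=0):
    """Draw one N-way K-shot episodic task from pseudo-labeled data:
    a support set for adaptation and a query set for the episode loss."""
    rng = np.random.default_rng(seed)
    # Only clusters with enough members can serve as episode classes.
    eligible = [c for c in np.unique(labels)
                if (labels == c).sum() >= k_shot + q_query]
    chosen = rng.choice(eligible, size=n_way, replace=False)
    support, query = [], []
    for c in chosen:
        idx = rng.permutation(np.flatnonzero(labels == c))
        support.append(features[idx[:k_shot]])
        query.append(features[idx[k_shot:k_shot + q_query]])
    return np.stack(support), np.stack(query)

# Toy demo: pseudo-label random 16-d features, then draw one episode.
feats = np.random.default_rng(1).normal(size=(200, 16))
pseudo = kmeans(feats, k=10)
S, Q = sample_episode(feats, pseudo)
print(S.shape, Q.shape)
```

In the full method the episode would then train the few-shot learner, whose improved features feed the next clustering round; the sketch above covers only the task-construction half of that loop.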


Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1036/7592391/35c44d9e2f5c/fncom-14-00083-g0001.jpg
