Faculty of Information Science and Technology, Multimedia University, Jalan Ayer Keroh Lama, 75450, Melaka, Malaysia.
Neural Netw. 2023 Aug;165:19-30. doi: 10.1016/j.neunet.2023.05.037. Epub 2023 May 24.
Few-shot learning aims to train a model on a limited number of base class samples so that it can classify novel class samples. However, attaining generalization from so few samples is not a trivial task. This paper proposes a novel few-shot learning approach named Self-supervised Contrastive Learning (SCL) that enriches the model representation with multiple self-supervision objectives. Given the base class samples, the model is trained with the base class loss. Subsequently, contrastive-based self-supervision is introduced to minimize the distance between each training sample and its augmented variants, improving sample discrimination. To recognize distant samples, rotation-based self-supervision is proposed, enabling the model to learn to recognize the rotation degree of each sample for better sample diversity. A multitask environment is introduced in which each training sample is assigned two class labels: a base class label and a rotation class label. Complex augmentation is put forth to help the model learn a deeper understanding of the object: the image structure of the training samples is augmented independently of the base class information. The proposed SCL is trained to minimize the base class loss, contrastive distance loss, and rotation class loss simultaneously, learning generic features that improve novel class performance. With these multiple self-supervision objectives, the proposed SCL outperforms state-of-the-art few-shot approaches on few-shot image classification benchmark datasets.
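Read as a training recipe, the abstract describes a joint objective combining three terms: a base class loss, a contrastive distance loss between each sample and its augmented variant, and a rotation class loss. The sketch below is one plausible PyTorch rendering of that combination, not the paper's implementation; the module names (`encoder`, `base_head`, `rot_head`), the cosine-distance form of the contrastive term, and the loss weights are all illustrative assumptions.

```python
# Minimal sketch of the combined SCL-style objective described in the
# abstract. All names and the specific contrastive formulation are
# assumptions for illustration, not details taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SCLObjective(nn.Module):
    def __init__(self, encoder, feat_dim, num_base_classes,
                 w_contrast=1.0, w_rotation=1.0):
        super().__init__()
        self.encoder = encoder                  # backbone, e.g. a ResNet
        self.base_head = nn.Linear(feat_dim, num_base_classes)
        self.rot_head = nn.Linear(feat_dim, 4)  # 0/90/180/270 degrees
        self.w_contrast = w_contrast
        self.w_rotation = w_rotation

    def forward(self, x, x_aug, base_labels):
        # Multitask setup: each sample gets a base class label (given)
        # and a rotation class label (generated here).
        rot_labels = torch.randint(0, 4, (x.size(0),), device=x.device)
        x_rot = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                             for img, k in zip(x, rot_labels)])

        z = self.encoder(x)          # features of the original sample
        z_aug = self.encoder(x_aug)  # features of its augmented variant
        z_rot = self.encoder(x_rot)  # features of the rotated sample

        # (1) Base class loss on the original samples.
        loss_base = F.cross_entropy(self.base_head(z), base_labels)

        # (2) Contrastive distance loss: pull each sample toward its
        # augmented variant (cosine distance is one simple choice).
        loss_contrast = (1 - F.cosine_similarity(z, z_aug, dim=1)).mean()

        # (3) Rotation class loss: predict the rotation degree.
        loss_rot = F.cross_entropy(self.rot_head(z_rot), rot_labels)

        # Minimize all three objectives simultaneously.
        return (loss_base
                + self.w_contrast * loss_contrast
                + self.w_rotation * loss_rot)
```

In a training loop, `x_aug` would come from the complex augmentation pipeline the abstract mentions (applied independently of the base class information), and the returned scalar is backpropagated as a single loss.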