Xian Yongqin, Korbar Bruno, Douze Matthijs, Torresani Lorenzo, Schiele Bernt, Akata Zeynep
IEEE Trans Pattern Anal Mach Intell. 2022 Dec;44(12):8949-8961. doi: 10.1109/TPAMI.2021.3120550. Epub 2022 Nov 7.
Few-shot learning aims to recognize novel classes from a few examples. Although significant progress has been made in the image domain, few-shot video classification remains relatively unexplored. We argue that previous methods underestimate the importance of video feature learning and propose to learn spatiotemporal features using a 3D CNN. We introduce a two-stage approach that learns video features on base classes and then fine-tunes the classifiers on novel classes, and we show that this simple baseline outperforms prior few-shot video classification methods by over 20 points on existing benchmarks. To circumvent the need for labeled examples, we present two novel approaches that yield further improvement. First, we leverage tag-labeled videos from a large dataset via tag retrieval, followed by selecting the best clips based on visual similarity. Second, we learn generative adversarial networks that generate video features of novel classes from their semantic embeddings. Moreover, we find that existing benchmarks are limited because they focus on only 5 novel classes in each testing episode, and we introduce more realistic benchmarks that involve more novel classes, i.e., few-shot learning, as well as a mixture of novel and base classes, i.e., generalized few-shot learning. The experimental results show that our retrieval and feature generation approaches significantly outperform the baseline on the new benchmarks.