School of Information Science and Engineering, Yanshan University, Qinhuangdao, 066000, China; Hebei Key Laboratory of Information Transmission and Signal Processing, Qinhuangdao, 066000, China.
Neural Netw. 2024 Nov;179:106578. doi: 10.1016/j.neunet.2024.106578. Epub 2024 Jul 26.
Self-supervised contrastive learning draws on powerful representation models to acquire generic semantic features from unlabeled data, and the key to training such models lies in how accurately they track motion features. Previous video contrastive learning methods have extensively used spatial or temporal augmentations to form similar instances, with the result that models are more likely to learn static backgrounds than motion features. To alleviate such background shortcuts, in this paper we propose a cross-view motion consistent (CVMC) self-supervised video inter-intra contrastive model that focuses on learning local details and long-term temporal relationships. Specifically, we first extract the dynamic features of consecutive video snippets and then align these features based on multi-view motion consistency. Meanwhile, we compare the optimized dynamic features for instance discrimination across different videos and for local spatial fine-grained detail with temporal order within the same video, respectively. Ultimately, the joint optimization of spatio-temporal alignment and motion discrimination effectively addresses the challenges of missing instance recognition, spatial compactness, and temporal perception in self-supervised learning. Experimental results show that our proposed self-supervised model can effectively learn visual representation information and achieves highly competitive performance compared to other state-of-the-art methods on both action recognition and video retrieval tasks.
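The inter-video instance discrimination the abstract describes is typically implemented with an InfoNCE-style contrastive loss, which pulls together two views of the same video and pushes apart features from other videos. A minimal pure-Python sketch of that loss follows; it is an illustration of the general technique, not the authors' exact CVMC objective, and the function names and temperature value are assumptions.

```python
import math


def cosine(u, v):
    """Cosine similarity between two feature vectors (plain lists of floats)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)


def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor.

    anchor:    feature of one view of a video
    positive:  feature of another view of the same video
    negatives: features taken from other videos
    """
    # Similarity to the positive goes first; negatives follow.
    logits = [cosine(anchor, positive) / temperature]
    logits += [cosine(anchor, n) / temperature for n in negatives]
    # Numerically stable softmax cross-entropy with the positive at index 0.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    return -math.log(exps[0] / sum(exps))
```

In training, the loss is averaged over all anchors in a batch; when the motion features of the two views are consistent, the positive similarity dominates and the loss is small.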