
AL-SAR: Active Learning for Skeleton-Based Action Recognition.

Authors

Li Jingyuan, Le Trung, Shlizerman Eli

Publication

IEEE Trans Neural Netw Learn Syst. 2024 Nov;35(11):16966-16974. doi: 10.1109/TNNLS.2023.3297853. Epub 2024 Oct 29.

Abstract

Action recognition from temporal multivariate sequences of features, such as identifying human actions, is typically approached by supervised training, as it requires many ground-truth annotations to reach high recognition accuracy. Unsupervised methods for organizing sequences into clusters have been introduced; however, such methods still require annotations to associate clusters with actions. The challenges in annotation necessitate an effective classification methodology that minimizes the required number of labels. Active learning (AL) approaches have been proposed to address these challenges and have established robust results on image classification. Such approaches are not directly applicable to sequences, since for sequences the variations span both the spatial and temporal domains. In this brief, we introduce a novel AL method for sequences, called "AL-SAR," which combines unsupervised training with sparsely supervised annotation. In particular, AL-SAR employs a multi-head mechanism for robust uncertainty evaluation of the latent space learned by an encoder-decoder framework. It iteratively selects a sparse set of samples whose annotation contributes the most to the disentanglement of the latent space. We evaluate our system on common benchmark datasets with multiple sequences and actions, such as NW-UCLA, NTU RGB+D 60, and UWA3D. Our results indicate that AL-SAR coupled with an encoder-decoder network outperforms other AL methods coupled with the same network structure.
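The abstract's central mechanism, selecting unlabeled sequences for annotation based on the disagreement of multiple heads over encoder latents, can be illustrated with a minimal sketch. The PyTorch code below is a hypothetical illustration, not the published AL-SAR implementation; the encoder interface, the linear heads, the head count, and the variance-based disagreement score are all assumptions made for demonstration.

```python
import torch
import torch.nn as nn

class MultiHeadScorer(nn.Module):
    """Several lightweight classification heads on top of a sequence encoder's latent."""

    def __init__(self, encoder: nn.Module, latent_dim: int, num_classes: int, num_heads: int = 5):
        super().__init__()
        # `encoder` stands in for the encoder half of an encoder-decoder network
        # and is assumed to map (batch, time, features) -> (batch, latent_dim).
        self.encoder = encoder
        self.heads = nn.ModuleList(
            [nn.Linear(latent_dim, num_classes) for _ in range(num_heads)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x)
        # Per-head class probabilities, stacked to (num_heads, batch, num_classes).
        return torch.stack([head(z).softmax(dim=-1) for head in self.heads])

def select_for_annotation(model: MultiHeadScorer, unlabeled: torch.Tensor, budget: int) -> torch.Tensor:
    """Return indices of the `budget` sequences whose heads disagree the most.

    Head disagreement (predictive variance summed over classes) is used here as
    a generic proxy for how informative an annotation would be.
    """
    model.eval()
    with torch.no_grad():
        probs = model(unlabeled)                       # (heads, batch, classes)
        disagreement = probs.var(dim=0).sum(dim=-1)    # (batch,)
    return torch.topk(disagreement, k=budget).indices  # send these to the annotator
```

In an iterative AL loop of the kind the abstract describes, the sequences picked by such a scorer would be annotated, added to the labeled pool, and the model retrained before the next selection round.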
