

Self-Supervised 3D Action Representation Learning With Skeleton Cloud Colorization.

Author Information

Yang Siyuan, Liu Jun, Lu Shijian, Hwa Er Meng, Hu Yongjian, Kot Alex C

Publication Information

IEEE Trans Pattern Anal Mach Intell. 2024 Jan;46(1):509-524. doi: 10.1109/TPAMI.2023.3325463. Epub 2023 Dec 5.

Abstract

3D skeleton-based human action recognition has attracted increasing attention in recent years. Most existing work focuses on supervised learning, which requires a large number of labeled action sequences that are often expensive and time-consuming to annotate. In this paper, we address self-supervised 3D action representation learning for skeleton-based action recognition. We investigate self-supervised representation learning and design a novel skeleton cloud colorization technique that is capable of learning spatial and temporal skeleton representations from unlabeled skeleton sequence data. We represent a skeleton action sequence as a 3D skeleton cloud and colorize each point in the cloud according to its temporal and spatial orders in the original (unannotated) skeleton sequence. Leveraging the colorized skeleton point cloud, we design an auto-encoder framework that can learn spatial-temporal features from the artificial color labels of skeleton joints effectively. Specifically, we design a two-stream pretraining network that leverages fine-grained and coarse-grained colorization to learn multi-scale spatial-temporal features. In addition, we design a Masked Skeleton Cloud Repainting task that can pretrain the designed auto-encoder framework to learn informative representations. We evaluate our skeleton cloud colorization approach with linear classifiers trained under different configurations, including unsupervised, semi-supervised, fully-supervised, and transfer learning settings. Extensive experiments on NTU RGB+D, NTU RGB+D 120, PKU-MMD, NW-UCLA, and UWA3D datasets show that the proposed method outperforms existing unsupervised and semi-supervised 3D action recognition methods by large margins and achieves competitive performance in supervised 3D action recognition as well.
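The core colorization idea described in the abstract can be illustrated with a minimal sketch: flatten a skeleton sequence into a point cloud and attach an artificial color to each joint that encodes its temporal order (frame index) and spatial order (joint index). This is a simplified illustration assuming a `(T, J, 3)` array layout and a linear color ramp per channel; the paper's actual fine-grained/coarse-grained color-assignment schemes may differ.

```python
import numpy as np

def colorize_skeleton_cloud(seq):
    """Turn a skeleton sequence into a colorized 3D point cloud.

    seq: array of shape (T, J, 3) -- T frames, J joints, xyz coordinates.
    Returns a (T*J, 6) array of [x, y, z, r, g, b], where the red channel
    encodes each point's temporal order and the green channel its spatial
    (joint) order, both normalized to [0, 1]; blue is left constant here.
    """
    T, J, _ = seq.shape
    points = seq.reshape(T * J, 3)
    # Temporal order: all J joints of frame t share the same value t/(T-1).
    t_order = np.repeat(np.arange(T), J) / max(T - 1, 1)
    # Spatial order: joint j gets j/(J-1), repeated across all frames.
    j_order = np.tile(np.arange(J), T) / max(J - 1, 1)
    colors = np.stack([t_order, j_order, np.zeros(T * J)], axis=1)
    return np.concatenate([points, colors], axis=1)

# Example: a 10-frame sequence with 25 joints (as in NTU RGB+D skeletons).
cloud = colorize_skeleton_cloud(np.random.rand(10, 25, 3))
print(cloud.shape)  # (250, 6)
```

An auto-encoder pretrained to repaint these artificial colors from the raw (or masked) point cloud must implicitly recover each point's temporal and spatial position, which is what makes the color labels a useful self-supervision signal.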

