
A Self-Supervised Gait Encoding Approach With Locality-Awareness for 3D Skeleton Based Person Re-Identification.

Publication Information

IEEE Trans Pattern Anal Mach Intell. 2022 Oct;44(10):6649-6666. doi: 10.1109/TPAMI.2021.3092833. Epub 2022 Sep 14.

DOI: 10.1109/TPAMI.2021.3092833
PMID: 34181534
Abstract

Person re-identification (Re-ID) via gait features within 3D skeleton sequences is a newly-emerging topic with several advantages. Existing solutions either rely on hand-crafted descriptors or supervised gait representation learning. This paper proposes a self-supervised gait encoding approach that can leverage unlabeled skeleton data to learn gait representations for person Re-ID. Specifically, we first create self-supervision by learning to reconstruct unlabeled skeleton sequences reversely, which involves richer high-level semantics to obtain better gait representations. Other pretext tasks are also explored to further improve self-supervised learning. Second, inspired by the fact that motion's continuity endows adjacent skeletons in one skeleton sequence and temporally consecutive skeleton sequences with higher correlations (referred to as locality in 3D skeleton data), we propose a locality-aware attention mechanism and a locality-aware contrastive learning scheme, which aim to preserve locality-awareness on the intra-sequence level and inter-sequence level respectively during self-supervised learning. Last, with context vectors learned by our locality-aware attention mechanism and contrastive learning scheme, a novel feature named Contrastive Attention-based Gait Encodings (CAGEs) is designed to represent gait effectively. Empirical evaluations show that our approach significantly outperforms skeleton-based counterparts by 15-40 percent Rank-1 accuracy, and it even achieves superior performance to numerous multi-modal methods with extra RGB or depth information. Our code is available at https://github.com/Kali-Hac/Locality-Awareness-SGE.
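The inter-sequence locality-aware contrastive scheme described in the abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation (see their repository for the real code): encodings of temporally consecutive sub-sequences are treated as positive pairs under an InfoNCE-style objective, with all other sequences serving as negatives. The function name, temperature value, and toy data are hypothetical.

```python
import numpy as np

def locality_aware_contrastive_loss(encodings, temperature=0.1):
    """Illustrative inter-sequence locality-aware contrastive objective.

    Rows of `encodings` are assumed to be gait encodings of temporally
    consecutive skeleton sub-sequences; row i and row i+1 form a positive
    pair (motion continuity), all other rows act as negatives (InfoNCE).
    """
    # L2-normalize so dot products become cosine similarities
    z = encodings / np.linalg.norm(encodings, axis=1, keepdims=True)
    sim = z @ z.T / temperature          # scaled pairwise similarities
    np.fill_diagonal(sim, -np.inf)       # exclude trivial self-pairs
    n = len(z)
    losses = []
    for i in range(n - 1):
        # positive: the temporally adjacent sub-sequence i+1
        log_prob = sim[i, i + 1] - np.log(np.sum(np.exp(sim[i])))
        losses.append(-log_prob)
    return float(np.mean(losses))

# Toy example: 5 sub-sequence encodings of dimension 8
rng = np.random.default_rng(0)
enc = rng.standard_normal((5, 8))
loss = locality_aware_contrastive_loss(enc)
```

Minimizing this loss pulls encodings of adjacent sub-sequences together while pushing non-adjacent ones apart, which is the "inter-sequence locality" intuition; the paper's full method additionally preserves intra-sequence locality via its attention mechanism.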

Similar Articles

1. A Self-Supervised Gait Encoding Approach With Locality-Awareness for 3D Skeleton Based Person Re-Identification.
   IEEE Trans Pattern Anal Mach Intell. 2022 Oct;44(10):6649-6666. doi: 10.1109/TPAMI.2021.3092833. Epub 2022 Sep 14.
2. Self-Supervised 3D Action Representation Learning With Skeleton Cloud Colorization.
   IEEE Trans Pattern Anal Mach Intell. 2024 Jan;46(1):509-524. doi: 10.1109/TPAMI.2023.3325463. Epub 2023 Dec 5.
3. X-Invariant Contrastive Augmentation and Representation Learning for Semi-Supervised Skeleton-Based Action Recognition.
   IEEE Trans Image Process. 2022;31:3852-3867. doi: 10.1109/TIP.2022.3175605. Epub 2022 Jun 2.
4. Multi-Granularity Anchor-Contrastive Representation Learning for Semi-Supervised Skeleton-Based Action Recognition.
   IEEE Trans Pattern Anal Mach Intell. 2023 Jun;45(6):7559-7576. doi: 10.1109/TPAMI.2022.3222871. Epub 2023 May 5.
5. Self-Supervised Action Representation Learning Based on Asymmetric Skeleton Data Augmentation.
   Sensors (Basel). 2022 Nov 20;22(22):8989. doi: 10.3390/s22228989.
6. Parts2Whole: Self-supervised Contrastive Learning via Reconstruction.
   Domain Adapt Represent Transf Distrib Collab Learn (2020). 2020 Oct;12444:85-95. doi: 10.1007/978-3-030-60548-3_9. Epub 2020 Sep 26.
7. Language-Guided 3-D Action Feature Learning Without Ground-Truth Sample Class Label.
   IEEE Trans Neural Netw Learn Syst. 2025 May;36(5):9356-9369. doi: 10.1109/TNNLS.2024.3409613. Epub 2025 May 2.
8. Contrastive self-supervised representation learning without negative samples for multimodal human action recognition.
   Front Neurosci. 2023 Jul 5;17:1225312. doi: 10.3389/fnins.2023.1225312. eCollection 2023.
9. ConMLP: MLP-Based Self-Supervised Contrastive Learning for Skeleton Data Analysis and Action Recognition.
   Sensors (Basel). 2023 Feb 22;23(5):2452. doi: 10.3390/s23052452.
10. Contrast-Reconstruction Representation Learning for Self-Supervised Skeleton-Based Action Recognition.
    IEEE Trans Image Process. 2022;31:6224-6238. doi: 10.1109/TIP.2022.3207577. Epub 2022 Sep 28.

Cited By

1. A Multi-sensor Gait Dataset Collected Under Non-standardized Dual-Task Conditions.
   Sci Data. 2025 Jul 1;12(1):1121. doi: 10.1038/s41597-025-05458-y.
2. A comprehensive review of gait analysis using deep learning approaches in criminal investigation.
   PeerJ Comput Sci. 2024 Nov 22;10:e2456. doi: 10.7717/peerj-cs.2456. eCollection 2024.
3. Reconsideration of Bertillonage in the age of digitalisation: Digital anthropometric patterns as a promising method for establishing identity.
   Forensic Sci Int Synerg. 2023 Dec 27;8:100452. doi: 10.1016/j.fsisyn.2023.100452. eCollection 2024.
4. Model-based and model-free deep features fusion for high performed human gait recognition.
   J Supercomput. 2023 Mar 19:1-38. doi: 10.1007/s11227-023-05156-9.
5. A multi-modal open dataset for mental-disorder analysis.
   Sci Data. 2022 Apr 19;9(1):178. doi: 10.1038/s41597-022-01211-x.