
An Online Continuous Human Action Recognition Algorithm Based on the Kinect Sensor.

Authors

Zhu Guangming, Zhang Liang, Shen Peiyi, Song Juan

Affiliation

School of Software, Xidian University, Xi'an 710071, China.

Publication

Sensors (Basel). 2016 Jan 28;16(2):161. doi: 10.3390/s16020161.

Abstract

Continuous human action recognition (CHAR) is more practical than isolated action recognition in human-robot interaction. In this paper, an online CHAR algorithm is proposed based on skeletal data extracted from RGB-D images captured by Kinect sensors. Each human action is modeled by a sequence of key poses and atomic motions in a particular order. To extract key poses and atomic motions, feature sequences are divided into pose feature segments and motion feature segments using an online segmentation method based on potential differences of features. Likelihood probabilities that each feature segment can be labeled as one of the extracted key poses or atomic motions are computed in the online model matching process. Based on these likelihood probabilities, an online classification method with a variable-length maximal entropy Markov model (MEMM) is performed to recognize continuous human actions. The variable-length MEMM ensures the effectiveness and efficiency of the proposed CHAR method. Compared with published CHAR methods, the proposed algorithm does not need to detect the start and end points of each human action in advance. Experimental results on public datasets show that the proposed algorithm is effective and highly efficient for recognizing continuous human actions.
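The segmentation step described above lends itself to a short sketch. The paper's exact potential-difference measure is not reproduced on this page, so the fragment below is only an illustration of the general idea: it uses the L2 norm of consecutive frame-feature differences as a stand-in "potential difference", labeling low-change spans as pose segments and high-change spans as motion segments. The function name and threshold are hypothetical, not from the paper.

```python
import numpy as np

def segment_by_potential_difference(features, threshold=0.1):
    """Split a per-frame skeletal feature sequence into alternating
    pose segments (low inter-frame change) and motion segments
    (high inter-frame change).

    Illustrative stand-in for the paper's potential-difference-based
    online segmentation: here the "potential difference" is simply the
    L2 norm of consecutive feature differences.
    Returns a list of (start_frame, end_frame, label) tuples.
    """
    diffs = np.linalg.norm(np.diff(features, axis=0), axis=1)
    if len(diffs) == 0:
        return []
    labels = diffs > threshold  # True -> motion frame, False -> pose frame
    segments, start = [], 0
    for i in range(1, len(labels)):
        if labels[i] != labels[i - 1]:
            segments.append((start, i, "motion" if labels[i - 1] else "pose"))
            start = i
    segments.append((start, len(labels), "motion" if labels[-1] else "pose"))
    return segments
```

In an online setting the same test would be applied frame by frame as features arrive, emitting a segment boundary whenever the potential difference crosses the threshold, so each segment can be matched against key poses or atomic motions as soon as it closes.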


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ed80/4801539/93fa5c1b58c7/sensors-16-00161-g001.jpg

Similar Articles

1. Structured Time Series Analysis for Human Action Segmentation and Recognition. IEEE Trans Pattern Anal Mach Intell. 2014 Jul;36(7):1414-27. doi: 10.1109/TPAMI.2013.244.
2. Continuous human action recognition using depth-MHI-HOG and a spotter model. Sensors (Basel). 2015 Mar 3;15(3):5197-227. doi: 10.3390/s150305197.
3. Inverse Dynamics for Action Recognition. IEEE Trans Cybern. 2013 Aug;43(4):1226-36. doi: 10.1109/TSMCB.2012.2226879.
4. Filtered pose graph for efficient kinect pose reconstruction. Multimed Tools Appl. 2017;76(3):4291-4312. doi: 10.1007/s11042-016-3546-4. Epub 2016 May 13.
5. Exploring 3D Human Action Recognition: from Offline to Online. Sensors (Basel). 2018 Feb 20;18(2):633. doi: 10.3390/s18020633.
6. Discriminative Relational Representation Learning for RGB-D Action Recognition. IEEE Trans Image Process. 2016 Jun;25(6):2856-2865. doi: 10.1109/TIP.2016.2556940. Epub 2016 Apr 20.
7. Modeling 4D Human-Object Interactions for Joint Event Segmentation, Recognition, and Object Localization. IEEE Trans Pattern Anal Mach Intell. 2017 Jun;39(6):1165-1179. doi: 10.1109/TPAMI.2016.2574712. Epub 2016 Jun 1.
8. Recognition of Human Activities Using Continuous Autoencoders with Wearable Sensors. Sensors (Basel). 2016 Feb 4;16(2):189. doi: 10.3390/s16020189.

Cited By

1. Human Activity Recognition via Hybrid Deep Learning Based Model. Sensors (Basel). 2022 Jan 1;22(1):323. doi: 10.3390/s22010323.
2. Exploring 3D Human Action Recognition: from Offline to Online. Sensors (Basel). 2018 Feb 20;18(2):633. doi: 10.3390/s18020633.
3. A Human Activity Recognition System Based on Dynamic Clustering of Skeleton Data. Sensors (Basel). 2017 May 11;17(5):1100. doi: 10.3390/s17051100.
4. Hierarchical Activity Recognition Using Smart Watches and RGB-Depth Cameras. Sensors (Basel). 2016 Oct 15;16(10):1713. doi: 10.3390/s16101713.

References

1. Anticipating Human Activities Using Object Affordances for Reactive Robotic Response. IEEE Trans Pattern Anal Mach Intell. 2016 Jan;38(1):14-29. doi: 10.1109/TPAMI.2015.2430335.
2. Learning Actionlet Ensemble for 3D Human Action Recognition. IEEE Trans Pattern Anal Mach Intell. 2014 May;36(5):914-27. doi: 10.1109/TPAMI.2013.198.
3. Continuous human action recognition using depth-MHI-HOG and a spotter model. Sensors (Basel). 2015 Mar 3;15(3):5197-227. doi: 10.3390/s150305197.
4. A vision-based system for intelligent monitoring: human behaviour analysis and privacy by context. Sensors (Basel). 2014 May 20;14(5):8895-925. doi: 10.3390/s140508895.
5. Enhanced computer vision with Microsoft Kinect sensor: a review. IEEE Trans Cybern. 2013 Oct;43(5):1318-34. doi: 10.1109/TCYB.2013.2265378. Epub 2013 Jun 25.
