Ghadi Yazeed, Akhter Israr, Alarfaj Mohammed, Jalal Ahmad, Kim Kibum
Department of Computer Science and Software Engineering, Al Ain University, Al Ain, UAE.
Department of Computer Science, Air University, Islamabad, Pakistan.
PeerJ Comput Sci. 2021 Nov 19;7:e764. doi: 10.7717/peerj-cs.764. eCollection 2021.
The study of human posture analysis and gait event detection from various types of inputs is a key contribution to human life logging. With the help of this research and its technologies, humans can save costs in terms of time and utility resources. In this paper, we present a robust approach to human posture analysis and gait event detection from complex video-based data. First, posture information, landmark information, and the human 2D skeleton mesh are extracted; using this information set, we reconstruct the human model from 2D to 3D. Contextual features are then extracted, namely, degrees of freedom over detected body parts, joint angle information, periodic and non-periodic motion, and human motion direction flow. For feature mining, we applied a rule-based feature mining technique, and for gait event detection and classification, a deep learning-based CNN technique was applied over the MPII video pose, COCO, and PoseTrack datasets. For the MPII video pose dataset, we achieved a mean human landmark detection accuracy of 87.09% and a mean gait event recognition accuracy of 90.90%. For the COCO dataset, we achieved a mean human landmark detection accuracy of 87.36% and a mean gait event recognition accuracy of 89.09%. For the PoseTrack dataset, we achieved a mean human landmark detection accuracy of 87.72% and a mean gait event recognition accuracy of 88.18%. The performance of the proposed system shows a significant improvement over existing state-of-the-art frameworks.
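As an illustration of the joint-angle features mentioned in the abstract, the following is a minimal sketch (not the authors' implementation) of computing the angle at a joint from three 2D landmark coordinates, e.g. hip–knee–ankle for knee flexion; the landmark values shown are hypothetical:

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at vertex b formed by 2D points a-b-c."""
    # Vectors from the joint vertex to the two neighboring landmarks
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(v1[0], v1[1])
    n2 = math.hypot(v2[0], v2[1])
    # Clamp to [-1, 1] to guard against floating-point drift
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_t))

# Hypothetical hip, knee, and ankle landmarks in image coordinates
hip, knee, ankle = (120, 200), (130, 300), (125, 400)
print(round(joint_angle(hip, knee, ankle), 1))
```

Angles like this, tracked frame by frame, are one way such contextual features can feed a rule-based mining stage before classification.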