Computer and Information Science Department, University of Michigan-Dearborn, Dearborn, MI 48128, USA.
Sensors (Basel). 2017 Nov 27;17(12):2735. doi: 10.3390/s17122735.
The widespread use of wearable sensors, such as those in smartwatches, has provided continuous access to valuable user-generated data, such as human motion, that can be used to identify an individual based on his or her motion patterns, such as gait. Several methods have been suggested for extracting various heuristic and high-level features from gait motion data to identify discriminative gait signatures and distinguish the target individual from others. However, manual, hand-crafted feature extraction is error-prone and subjective. Furthermore, the motion data collected from inertial sensors have a complex structure, and the detachment between the manual feature extraction module and the predictive learning model may limit generalization. In this paper, we propose a novel approach for human gait identification using a time-frequency (TF) expansion of human gait cycles in order to capture joint two-dimensional (2D) spectral and temporal patterns of gait cycles. We then design a deep convolutional neural network (DCNN) to extract discriminative features from the 2D expanded gait cycles and jointly optimize the identification model and the spectro-temporal features in a discriminative fashion. We synchronously collect raw motion data from five inertial sensors placed at the chest, lower back, right wrist, right knee, and right ankle of each human subject in order to investigate the impact of sensor location on gait identification performance. We then present two methods for early (input-level) and late (decision-score-level) multi-sensor fusion to improve the generalization performance of gait identification. In particular, we propose the minimum error score fusion (MESF) method, which discriminatively learns the linear fusion weights of the individual DCNN scores at the decision level by iteratively minimizing the error rate on the training data. Ten subjects participated in this study; hence, the problem is a 10-class identification task. In our experiments, 91% subject identification accuracy was achieved using the best individual IMU and the 2DTF-DCNN. We then investigated our proposed early and late sensor fusion approaches, which improved the gait identification accuracy of the system to 93.36% and 97.06%, respectively.
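To make the TF expansion step concrete, the following is a minimal Python sketch of turning one segmented 1D gait cycle into a 2D spectro-temporal image suitable as DCNN input. The sampling rate, cycle length, and spectrogram parameters (FS, CYCLE_LEN, nperseg, noverlap) are illustrative assumptions, not the paper's actual settings; the tf_expand function is a hypothetical name.

# A minimal sketch of the 2D time-frequency expansion, assuming a single-axis
# IMU stream sampled at 100 Hz and already segmented into gait cycles.
import numpy as np
from scipy.signal import spectrogram

FS = 100                 # assumed IMU sampling rate (Hz)
CYCLE_LEN = 128          # assumed samples per segmented gait cycle

def tf_expand(cycle: np.ndarray) -> np.ndarray:
    """Expand one 1D gait cycle into a 2D spectro-temporal image."""
    # Short-time Fourier magnitude: rows = frequency bins, cols = time frames.
    _, _, sxx = spectrogram(cycle, fs=FS, nperseg=32, noverlap=24)
    # Log compression keeps the dynamic range manageable for the DCNN input.
    return np.log1p(sxx)

# Example: one synthetic gait cycle -> 2D input for the DCNN.
rng = np.random.default_rng(0)
cycle = (np.sin(2 * np.pi * 2 * np.arange(CYCLE_LEN) / FS)
         + 0.1 * rng.standard_normal(CYCLE_LEN))
image = tf_expand(cycle)
print(image.shape)       # (freq_bins, time_frames), here (17, 13)

Under this reading, early (input-level) fusion would stack the per-sensor 2D images as input channels of a single DCNN, while late fusion combines the per-sensor DCNN score vectors, as sketched next.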
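The MESF idea, learning linear decision-level fusion weights over the per-sensor DCNN scores by iteratively minimizing the training error rate, could be sketched as below. Because the error rate is non-differentiable, this sketch uses a coordinate-wise grid search as an illustrative stand-in; the paper's exact update rule is not reproduced here, and mesf_fit and error_rate are hypothetical names.

# A minimal MESF-style sketch: learn nonnegative linear fusion weights over
# per-sensor DCNN softmax scores by iteratively reducing the training error rate.
import numpy as np

def error_rate(weights, scores, labels):
    # scores: (n_sensors, n_samples, n_classes) per-sensor DCNN softmax outputs.
    fused = np.tensordot(weights, scores, axes=1)   # -> (n_samples, n_classes)
    return np.mean(fused.argmax(axis=1) != labels)

def mesf_fit(scores, labels, n_iter=20, grid=np.linspace(0.0, 1.0, 21)):
    n_sensors = scores.shape[0]
    w = np.full(n_sensors, 1.0 / n_sensors)         # start from equal weights
    for _ in range(n_iter):
        for k in range(n_sensors):                  # update one weight at a time
            trials = []
            for g in grid:
                cand = w.copy()
                cand[k] = g
                s = cand.sum()
                cand = cand / s if s > 0 else np.full(n_sensors, 1.0 / n_sensors)
                trials.append((error_rate(cand, scores, labels), tuple(cand)))
            w = np.array(min(trials)[1])            # keep the lowest-error weights
    return w

# Example with random scores for 5 sensors, 50 training samples, 10 classes.
rng = np.random.default_rng(1)
scores = rng.random((5, 50, 10))
labels = rng.integers(0, 10, 50)
w = mesf_fit(scores, labels)
print(w, error_rate(w, scores, labels))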