Sun Wenhao, Lu Guangda, Zhao Zhuangzhuang, Guo Tinghang, Qin Zhuanping, Han Yu
School of Automation and Electrical Engineering, Tianjin University of Technology and Education, Tianjin 300222, China.
Tianjin Key Laboratory of Information Sensing & Intelligent Control, Tianjin 300222, China.
Entropy (Basel). 2023 May 23;25(6):837. doi: 10.3390/e25060837.
Gait recognition is an important research direction in biometric authentication. In practical applications, however, the captured gait data are often short, whereas successful recognition typically requires a long, complete gait video. In addition, gait images captured from different viewpoints strongly affect recognition performance. To address these problems, we design a gait data generation network that expands the cross-view image data required for gait recognition and provides sufficient input to the silhouette-based feature extraction branch. We also propose a gait motion feature extraction network based on regional time-series coding. The joint motion data within each body region are first encoded independently as time series, and the per-region temporal features are then combined through a secondary coding step, yielding the distinctive motion relationships between body regions. Finally, bilinear matrix decomposition pooling is used to fuse the spatial silhouette features with the motion time-series features, enabling complete gait recognition from shorter video input. We validate the silhouette image branch and the motion time-series branch on the OUMVLP-Pose and CASIA-B datasets, respectively, and use evaluation metrics such as the IS entropy value and Rank-1 accuracy to demonstrate the effectiveness of the designed network. Finally, we collect real-world gait motion data and test them on the complete two-branch fusion network. The experimental results show that the designed network effectively extracts the time-series features of human motion and expands multi-view gait data. The real-world tests further demonstrate that the proposed method is effective and feasible for gait recognition with short videos as input.
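The sketch below illustrates, in PyTorch, one plausible organization of the two-branch idea described above: per-region temporal encoding of joint sequences followed by a secondary coding step, and a low-rank (MFB-style) bilinear pooling as one reading of "bilinear matrix decomposition pooling" for fusing silhouette and motion features. All module names, the region partition, the GRU encoders, and the feature dimensions are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the two-branch fusion described in the abstract.
# All names, region splits, and dimensions are assumptions for illustration.
import torch
import torch.nn as nn


class RegionalTemporalEncoder(nn.Module):
    """Primary coding: encode each body region's joint sequence independently.
    Secondary coding: combine the per-region features to capture inter-region relations."""

    def __init__(self, regions, joint_dim=2, hidden=64, out_dim=128):
        super().__init__()
        self.regions = regions  # e.g. {"left_leg": [3, 4, 5], ...} (joint indices, assumed)
        self.primary = nn.ModuleDict({
            name: nn.GRU(len(idx) * joint_dim, hidden, batch_first=True)
            for name, idx in regions.items()
        })
        self.secondary = nn.GRU(hidden, out_dim, batch_first=True)

    def forward(self, joints):                            # joints: (B, T, J, joint_dim)
        region_feats = []
        for name, idx in self.regions.items():
            x = joints[:, :, idx, :].flatten(2)           # (B, T, len(idx) * joint_dim)
            _, h = self.primary[name](x)                  # final hidden state per region
            region_feats.append(h.squeeze(0))
        stacked = torch.stack(region_feats, dim=1)        # (B, num_regions, hidden)
        _, h = self.secondary(stacked)                    # secondary coding over regions
        return h.squeeze(0)                               # (B, out_dim)


class FactorizedBilinearPooling(nn.Module):
    """Low-rank bilinear pooling (MFB-style), used here as one possible interpretation
    of 'bilinear matrix decomposition pooling' for fusing the two branches."""

    def __init__(self, dim_a, dim_b, out_dim=256, rank=4):
        super().__init__()
        self.proj_a = nn.Linear(dim_a, out_dim * rank)
        self.proj_b = nn.Linear(dim_b, out_dim * rank)
        self.out_dim, self.rank = out_dim, rank

    def forward(self, a, b):
        joint = self.proj_a(a) * self.proj_b(b)                        # element-wise product
        joint = joint.view(-1, self.out_dim, self.rank).sum(dim=2)     # sum-pool over rank factors
        joint = torch.sign(joint) * torch.sqrt(joint.abs() + 1e-8)     # power normalization
        return nn.functional.normalize(joint, dim=1)                   # L2 normalization


if __name__ == "__main__":
    regions = {"torso": [0, 1, 2], "left_leg": [3, 4, 5], "right_leg": [6, 7, 8]}
    motion_branch = RegionalTemporalEncoder(regions)
    fuse = FactorizedBilinearPooling(dim_a=256, dim_b=128)
    silhouette_feat = torch.randn(4, 256)                  # stand-in for the silhouette branch output
    motion_feat = motion_branch(torch.randn(4, 30, 9, 2))  # 30 frames, 9 joints, 2-D coordinates
    print(fuse(silhouette_feat, motion_feat).shape)        # torch.Size([4, 256])
```

In this reading, the shared fused embedding could then feed a standard recognition head (e.g., a classifier or metric-learning loss); that head and the cross-view silhouette generation network are not sketched here.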