Department of Computing, School of Electrical Engineering and Computer Science, National University of Sciences and Technology (NUST), Islamabad 44000, Pakistan.
The College of Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China.
Sensors (Basel). 2020 Feb 10;20(3):949. doi: 10.3390/s20030949.
Person re-identification (re-ID) is an essential component of automated surveillance environments. The problem is predominantly tackled using data acquired from vision sensors with appearance-based features, which depend strongly on visual cues such as color and texture and consequently limit the precise re-identification of an individual. To overcome this strong dependence on visual features, many researchers have approached the re-identification problem using human gait, which is believed to be unique and to provide a distinctive biometric signature particularly suitable for re-ID in uncontrolled environments. However, image-based gait analysis often fails to extract quality measurements of an individual's motion patterns owing to variations in viewpoint, illumination (daylight), clothing, worn accessories, etc. To this end, rather than relying on image-based motion measurement, this paper demonstrates the potential to re-identify an individual using inertial measurement units (IMUs) based on two common sensors, namely the gyroscope and accelerometer. The experiment was carried out on data acquired with smartphones and wearable IMUs from 86 randomly selected individuals (49 males and 37 females) between 17 and 72 years of age. The data signals were first segmented into single steps and strides, which were separately fed to a sequential deep recurrent neural network trained to capture implicit, arbitrarily long-term temporal dependencies. The experimental setup was devised such that the network was trained on all subjects using half of the step and stride sequences, while inference for re-identification was performed on the remaining half. The obtained experimental results demonstrate the potential to reliably and accurately re-identify an individual from inertial sensor data alone.
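The abstract's first processing stage splits the continuous IMU signal into single steps before any learning takes place. The sketch below illustrates one plausible way to do this with a simple peak-detection heuristic on the accelerometer magnitude; the function name, threshold, and minimum-gap parameters are illustrative assumptions, not the authors' exact segmentation method.

```python
# Hypothetical sketch of the step-segmentation stage described in the
# abstract. The peak-detection heuristic is an illustrative assumption,
# not the paper's actual algorithm.
import numpy as np

def segment_steps(accel_mag, threshold=1.5, min_gap=30):
    """Split an accelerometer-magnitude signal into per-step windows.

    A step boundary is marked at each local peak exceeding `threshold`,
    with at least `min_gap` samples between consecutive peaks.
    """
    peaks = []
    for i in range(1, len(accel_mag) - 1):
        if (accel_mag[i] > threshold
                and accel_mag[i] > accel_mag[i - 1]
                and accel_mag[i] >= accel_mag[i + 1]
                and (not peaks or i - peaks[-1] >= min_gap)):
            peaks.append(i)
    # Each single-step segment spans from one detected peak to the next.
    return [accel_mag[a:b] for a, b in zip(peaks, peaks[1:])]

# Synthetic walking-like signal: gravity baseline plus periodic impacts,
# one "step" per 100 samples.
t = np.arange(1000)
signal = 1.0 + 0.8 * np.maximum(0.0, np.sin(2 * np.pi * t / 100))
steps = segment_steps(signal, threshold=1.5, min_gap=30)
print(len(steps))  # number of single-step segments found
```

In the paper's pipeline, segments like these (and two-step strides) would then be fed as variable-length sequences to the recurrent network, with half of each subject's sequences used for training and the other half held out for re-identification.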