Rafiq Mehrab, Almujally Nouf Abdullah, Algarni Asaad, Alshehri Mohammed, AlQahtani Yahya, Jalal Ahmad, Liu Hui
Department of Computer Science, Air University, Islamabad, Pakistan.
Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Front Bioeng Biotechnol. 2025 Jul 9;13:1558529. doi: 10.3389/fbioe.2025.1558529. eCollection 2025.
Advancements in sensing technologies have enabled the integration of inertial sensors, such as accelerometers and gyroscopes, into everyday devices like smartphones and wearables. These sensors, initially intended to enhance device functionality, are now pivotal in applications such as Human Locomotion Recognition (HLR), with relevance in sports, healthcare, rehabilitation, and context-aware systems. This study presents a robust system for accurately recognizing human movement and localization characteristics using sensor data.
Two datasets were used: the Extrasensory dataset and the KU-HAR dataset. The Extrasensory dataset includes multimodal sensor data (IMU, GPS, and audio) from 60 participants, while the KU-HAR dataset provides accelerometer and gyroscope data from 90 participants performing 18 distinct activities. Raw sensor signals were first denoised with a second-order Butterworth filter and then segmented using Hamming windows. Feature extraction covered skewness, energy, kurtosis, linear prediction cepstral coefficients (LPCC), and dynamic time warping (DTW) for locomotion recognition, together with step count and step length for localization. A Yeo-Johnson power transformation was then applied to normalize the distributions of the extracted features.
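For a concrete picture of this preprocessing chain, the Python sketch below applies the same sequence of steps to a single accelerometer axis: second-order Butterworth denoising, Hamming-windowed segmentation, a subset of the statistical features, and the Yeo-Johnson transform. The sampling rate, window length, overlap, and filter cutoff are illustrative assumptions rather than values reported in the paper, and the LPCC, DTW, step count, and step length features are omitted for brevity.

# Illustrative Python sketch of the described pipeline (assumed parameters).
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import skew, kurtosis
from sklearn.preprocessing import PowerTransformer

FS = 50.0    # assumed sampling rate in Hz (not stated in the abstract)
WIN = 128    # assumed window length in samples
STEP = 64    # assumed 50% window overlap

def denoise(x, cutoff_hz=5.0):
    # Second-order low-pass Butterworth filter, applied zero-phase.
    b, a = butter(2, cutoff_hz, btype="low", fs=FS)
    return filtfilt(b, a, x)

def segment(x):
    # Slice the signal into overlapping frames weighted by a Hamming window.
    w = np.hamming(WIN)
    starts = range(0, len(x) - WIN + 1, STEP)
    return np.array([x[s:s + WIN] * w for s in starts])

def basic_features(frames):
    # Per-frame skewness, energy, and kurtosis; LPCC, DTW, step count, and
    # step length belong to the paper's feature set but are omitted here.
    return np.column_stack([
        skew(frames, axis=1),
        np.sum(frames ** 2, axis=1),   # signal energy
        kurtosis(frames, axis=1),
    ])

# Demo on one synthetic accelerometer axis (60 s of noise).
raw = np.random.default_rng(0).normal(size=int(FS * 60))
feats = basic_features(segment(denoise(raw)))
feats = PowerTransformer(method="yeo-johnson").fit_transform(feats)
print(feats.shape)   # (number of windows, 3)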
The proposed system achieved 90% accuracy on the Extrasensory dataset and 91% on the KU-HAR dataset. These results surpass the performance of several existing state-of-the-art methods. Statistical analysis and additional testing confirmed the robustness and generalization capabilities of the model across both datasets.
The developed system demonstrates strong performance in recognizing human locomotion and localization across different sensor environments, even with noisy data. Its effectiveness in real-world scenarios highlights its potential for integration into healthcare monitoring, physical rehabilitation, and intelligent wearable systems. The model's scalability and high accuracy also support future deployment on embedded platforms.