
A Framework of Combining Short-Term Spatial/Frequency Feature Extraction and Long-Term IndRNN for Activity Recognition.

Affiliations

Glasgow College, University of Electronic Science and Technology of China, Chengdu 611731, China.

School of Control Science and Engineering, Shandong University, Jinan 250061, China.

Publication Information

Sensors (Basel). 2020 Dec 7;20(23):6984. doi: 10.3390/s20236984.

DOI: 10.3390/s20236984
PMID: 33297370
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7729609/
Abstract

Smartphone-sensors-based human activity recognition is attracting increasing interest due to the popularization of smartphones. It is a difficult long-range temporal recognition problem, especially with large intraclass distances such as carrying smartphones at different locations and small interclass distances such as taking a train or subway. To address this problem, we propose a new framework of combining short-term spatial/frequency feature extraction and a long-term independently recurrent neural network (IndRNN) for activity recognition. Considering the periodic characteristics of the sensor data, short-term temporal features are first extracted in the spatial and frequency domains. Then, the IndRNN, which can capture long-term patterns, is used to further obtain the long-term features for classification. Given the large differences when the smartphone is carried at different locations, a group-based location recognition is first developed to pinpoint the location of the smartphone. The Sussex-Huawei Locomotion (SHL) dataset from the SHL Challenge is used for evaluation. An earlier version of the proposed method won the second place award in the SHL Challenge 2020 (first place if not considering the multiple models fusion approach). The proposed method is further improved in this paper and achieves 80.72% accuracy, better than the existing methods using a single model.
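The pipeline described above — short-term features computed per sensor window in the spatial and frequency domains, then an IndRNN summarizing the window sequence — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature choices (mean/std statistics, low FFT bins) and all names (`short_term_features`, `IndRNNCell`) are assumptions for exposition; the IndRNN recurrence itself, h_t = relu(W·x_t + u ⊙ h_{t−1} + b) with an element-wise recurrent weight u, follows the published IndRNN formulation.

```python
import numpy as np

def short_term_features(window, n_bins=8):
    """Features for one sensor window of shape (n_samples, n_channels).

    Spatial domain: per-channel mean and standard deviation.
    Frequency domain: since the sensor data is roughly periodic, the
    magnitudes of the lowest FFT bins capture the dominant rhythms
    (e.g. gait cadence, vehicle vibration).
    """
    spatial = np.concatenate([window.mean(axis=0), window.std(axis=0)])
    spectrum = np.abs(np.fft.rfft(window, axis=0))  # magnitude per channel
    freq = spectrum[:n_bins].flatten()              # compact low-frequency descriptor
    return np.concatenate([spatial, freq])

class IndRNNCell:
    """Independently recurrent cell: unlike a vanilla RNN, each hidden
    unit has a single scalar recurrent weight, so units evolve
    independently over time and gradients are easier to control over
    long sequences: h_t = relu(W x_t + u * h_{t-1} + b)."""

    def __init__(self, in_dim, hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, (hidden, in_dim))  # input weights
        self.u = rng.uniform(-1.0, 1.0, hidden)          # element-wise recurrent weights
        self.b = np.zeros(hidden)

    def forward(self, xs):
        h = np.zeros(len(self.b))
        for x in xs:  # xs: sequence of short-term feature vectors
            h = np.maximum(0.0, self.W @ x + self.u * h + self.b)
        return h      # long-term summary, to be fed to a classifier

# Toy usage: 10 windows of 200 samples from a 3-axis sensor.
rng = np.random.default_rng(1)
windows = [rng.normal(size=(200, 3)) for _ in range(10)]
feats = [short_term_features(w) for w in windows]   # each has 6 + 8*3 = 30 dims
cell = IndRNNCell(in_dim=30, hidden=32)
summary = cell.forward(feats)                       # shape (32,)
```

In practice the paper's system is trained end to end and adds a group-based location-recognition stage before classification; the sketch only shows the two-stage short-term/long-term feature flow.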

Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e8d4/7729609/17ac0583299a/sensors-20-06984-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e8d4/7729609/76d900b2b6c0/sensors-20-06984-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e8d4/7729609/a775b95504c3/sensors-20-06984-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e8d4/7729609/e8b85ed0f998/sensors-20-06984-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e8d4/7729609/23c269160e6f/sensors-20-06984-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e8d4/7729609/8baa2bdfa9f3/sensors-20-06984-g008.jpg

Similar Articles

1
A Framework of Combining Short-Term Spatial/Frequency Feature Extraction and Long-Term IndRNN for Activity Recognition.
Sensors (Basel). 2020 Dec 7;20(23):6984. doi: 10.3390/s20236984.
2
LSTM Networks Using Smartphone Data for Sensor-Based Human Activity Recognition in Smart Homes.
Sensors (Basel). 2021 Feb 26;21(5):1636. doi: 10.3390/s21051636.
3
Transportation Mode Detection Combining CNN and Vision Transformer with Sensors Recalibration Using Smartphone Built-In Sensors.
Sensors (Basel). 2022 Aug 26;22(17):6453. doi: 10.3390/s22176453.
4
A Study of Two-Way Short- and Long-Term Memory Network Intelligent Computing IoT Model-Assisted Home Education Attention Mechanism.
Comput Intell Neurosci. 2021 Dec 21;2021:3587884. doi: 10.1155/2021/3587884. eCollection 2021.
5
Position-Aware Indoor Human Activity Recognition Using Multisensors Embedded in Smartphones.
Sensors (Basel). 2024 May 24;24(11):3367. doi: 10.3390/s24113367.
6
Deep Learning-Based Human Activity Real-Time Recognition for Pedestrian Navigation.
Sensors (Basel). 2020 Apr 30;20(9):2574. doi: 10.3390/s20092574.
7
Intelligent Localization and Deep Human Activity Recognition through IoT Devices.
Sensors (Basel). 2023 Aug 23;23(17):7363. doi: 10.3390/s23177363.
8
Smartphone Based Human Activity Recognition with Feature Selection and Dense Neural Network.
Annu Int Conf IEEE Eng Med Biol Soc. 2020 Jul;2020:5888-5891. doi: 10.1109/EMBC44109.2020.9176239.
9
An Efficient and Lightweight Deep Learning Model for Human Activity Recognition Using Smartphones.
Sensors (Basel). 2021 Jun 2;21(11):3845. doi: 10.3390/s21113845.
10
Biosensor-Driven IoT Wearables for Accurate Body Motion Tracking and Localization.
Sensors (Basel). 2024 May 10;24(10):3032. doi: 10.3390/s24103032.

Cited By

1
MaskDGNets: Masked-attention guided dynamic graph aggregation network for event extraction.
PLoS One. 2024 Nov 15;19(11):e0306673. doi: 10.1371/journal.pone.0306673. eCollection 2024.
2
Ensemble of RNN Classifiers for Activity Detection Using a Smartphone and Supporting Nodes.
Sensors (Basel). 2022 Dec 3;22(23):9451. doi: 10.3390/s22239451.
3
Human Action Recognition: A Paradigm of Best Deep Learning Features Selection and Serial Based Extended Fusion.
Sensors (Basel). 2021 Nov 28;21(23):7941. doi: 10.3390/s21237941.

References

1
REAL-Time Smartphone Activity Classification Using Inertial Sensors-Recognition of Scrolling, Typing, and Watching Videos While Sitting or Walking.
Sensors (Basel). 2020 Jan 24;20(3):655. doi: 10.3390/s20030655.
2
Smartphone-Based Activity Recognition for Indoor Localization Using a Convolutional Neural Network.
Sensors (Basel). 2019 Feb 1;19(3):621. doi: 10.3390/s19030621.
3
A Deep Learning Approach to on-Node Sensor Data Analytics for Mobile or Wearable Devices.
IEEE J Biomed Health Inform. 2017 Jan;21(1):56-64. doi: 10.1109/JBHI.2016.2633287. Epub 2016 Dec 23.
4
A systematic review of smartphone-based human activity recognition methods for health research.
NPJ Digit Med. 2021 Oct 18;4(1):148. doi: 10.1038/s41746-021-00514-4.
5
Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition.
Sensors (Basel). 2016 Jan 18;16(1):115. doi: 10.3390/s16010115.
6
SVM-based multimodal classification of activities of daily living in Health Smart Homes: sensors, algorithms, and first experimental results.
IEEE Trans Inf Technol Biomed. 2010 Mar;14(2):274-83. doi: 10.1109/TITB.2009.2037317. Epub 2009 Dec 11.
7
Activity classification using realistic data from wearable sensors.
IEEE Trans Inf Technol Biomed. 2006 Jan;10(1):119-28. doi: 10.1109/titb.2005.856863.