

Human Activity Recognition via Hybrid Deep Learning Based Model.

Affiliation

Mixed Reality and Interaction Lab, Department of Software, Sejong University, Seoul 05006, Korea.

Publication

Sensors (Basel). 2022 Jan 1;22(1):323. doi: 10.3390/s22010323.

DOI: 10.3390/s22010323
PMID: 35009865
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8749555/
Abstract

In recent years, Human Activity Recognition (HAR) has become one of the most important research topics in the domains of health and human-machine interaction. Many artificial-intelligence-based models have been developed for activity recognition; however, these algorithms fail to extract spatial and temporal features and therefore perform poorly on real-world, long-term HAR. Furthermore, only a limited number of datasets for physical activity recognition are publicly available, and they contain few activities. Considering these limitations, we develop a hybrid model combining a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) for activity recognition, where the CNN is used for spatial feature extraction and the LSTM network is utilized for learning temporal information. Additionally, a new challenging dataset is generated, collected from 20 participants using the Kinect V2 sensor and containing 12 different classes of human physical activities. An extensive ablation study is performed over different traditional machine learning and deep learning models to obtain the optimum solution for HAR. An accuracy of 90.89% is achieved via the CNN-LSTM technique, which shows that the proposed model is suitable for HAR applications.
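The hybrid architecture described in the abstract (a CNN for per-frame spatial features feeding an LSTM for temporal modeling) can be sketched as below. This is a minimal illustration, not the paper's exact configuration: the layer widths, the single LSTM layer, and the 75 input channels (25 Kinect V2 joints × 3 coordinates) are all assumptions.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Illustrative CNN-LSTM for sensor/skeleton-based HAR.

    Hyperparameters are assumed for the sketch; only the overall
    CNN-then-LSTM structure follows the paper's description.
    """
    def __init__(self, in_channels=75, num_classes=12, hidden=128):
        super().__init__()
        # 1-D convolutions extract spatial features at each timestep
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # LSTM learns temporal dependencies across frames
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, x):
        # x: (batch, time, channels)
        feats = self.cnn(x.transpose(1, 2))          # (batch, 64, time)
        out, _ = self.lstm(feats.transpose(1, 2))    # (batch, time, hidden)
        return self.fc(out[:, -1])                   # classify from last step

model = CNNLSTM()
# 4 clips, 30 frames each, 25 joints x 3 coordinates
logits = model(torch.randn(4, 30, 75))
print(logits.shape)  # torch.Size([4, 12])
```

Classification over the 12 activity classes would then use a standard cross-entropy loss on these logits; the choice of reading only the final LSTM timestep is one common pooling option among several.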


Figures (from PMC8749555):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e16a/8749555/357e616e53c8/sensors-22-00323-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e16a/8749555/db917ee6b0bd/sensors-22-00323-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e16a/8749555/17ce1b752376/sensors-22-00323-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e16a/8749555/d2daa2f9adec/sensors-22-00323-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e16a/8749555/befdbaef1708/sensors-22-00323-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e16a/8749555/525f83c36b12/sensors-22-00323-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e16a/8749555/7e69bab2b065/sensors-22-00323-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e16a/8749555/76edecebfbd2/sensors-22-00323-g008.jpg

Similar Articles

1
Human Activity Recognition via Hybrid Deep Learning Based Model.
Sensors (Basel). 2022 Jan 1;22(1):323. doi: 10.3390/s22010323.
2
LSTM Networks Using Smartphone Data for Sensor-Based Human Activity Recognition in Smart Homes.
Sensors (Basel). 2021 Feb 26;21(5):1636. doi: 10.3390/s21051636.
3
Deep Learning-Based Human Activity Real-Time Recognition for Pedestrian Navigation.
Sensors (Basel). 2020 Apr 30;20(9):2574. doi: 10.3390/s20092574.
4
Sensor-Based Human Activity Recognition with Spatio-Temporal Deep Learning.
Sensors (Basel). 2021 Mar 18;21(6):2141. doi: 10.3390/s21062141.
5
An Efficient and Lightweight Deep Learning Model for Human Activity Recognition Using Smartphones.
Sensors (Basel). 2021 Jun 2;21(11):3845. doi: 10.3390/s21113845.
6
Achieving More with Less: A Lightweight Deep Learning Solution for Advanced Human Activity Recognition (HAR).
Sensors (Basel). 2024 Aug 22;24(16):5436. doi: 10.3390/s24165436.
7
Deep Learning for Human Activity Recognition on 3D Human Skeleton: Survey and Comparative Study.
Sensors (Basel). 2023 May 27;23(11):5121. doi: 10.3390/s23115121.
8
Ensem-HAR: An Ensemble Deep Learning Model for Smartphone Sensor-Based Human Activity Recognition for Measurement of Elderly Health Monitoring.
Biosensors (Basel). 2022 Jun 7;12(6):393. doi: 10.3390/bios12060393.
9
An improved human activity recognition technique based on convolutional neural network.
Sci Rep. 2023 Dec 19;13(1):22581. doi: 10.1038/s41598-023-49739-1.
10
A CSI-Based Human Activity Recognition Using Deep Learning.
Sensors (Basel). 2021 Oct 30;21(21):7225. doi: 10.3390/s21217225.

Cited By

1
The machine learning algorithm based on decision tree optimization for pattern recognition in track and field sports.
PLoS One. 2025 Feb 13;20(2):e0317414. doi: 10.1371/journal.pone.0317414. eCollection 2025.
2
Dual-Stream Architecture Enhanced by Soft-Attention Mechanism for Plant Species Classification.
Plants (Basel). 2024 Sep 22;13(18):2655. doi: 10.3390/plants13182655.
3
Prediction of white blood cell count during exercise: a comparison between standalone and hybrid intelligent algorithms.
Sci Rep. 2024 Sep 5;14(1):20683. doi: 10.1038/s41598-024-71576-z.
4
Wearable sensors based on artificial intelligence models for human activity recognition.
Front Artif Intell. 2024 Jun 27;7:1424190. doi: 10.3389/frai.2024.1424190. eCollection 2024.
5
PAR-Net: An Enhanced Dual-Stream CNN-ESN Architecture for Human Physical Activity Recognition.
Sensors (Basel). 2024 Mar 16;24(6):1908. doi: 10.3390/s24061908.
6
Wearable Sensor-Based Residual Multifeature Fusion Shrinkage Networks for Human Activity Recognition.
Sensors (Basel). 2024 Jan 24;24(3):758. doi: 10.3390/s24030758.
7
Deep Wavelet Convolutional Neural Networks for Multimodal Human Activity Recognition Using Wearable Inertial Sensors.
Sensors (Basel). 2023 Dec 9;23(24):9721. doi: 10.3390/s23249721.
8
Multivariate CNN Model for Human Locomotion Activity Recognition with a Wearable Exoskeleton Robot.
Bioengineering (Basel). 2023 Sep 13;10(9):1082. doi: 10.3390/bioengineering10091082.
9
Hybrid Models Based on Fusion Features of a CNN and Handcrafted Features for Accurate Histopathological Image Analysis for Diagnosing Malignant Lymphomas.
Diagnostics (Basel). 2023 Jul 4;13(13):2258. doi: 10.3390/diagnostics13132258.
10
Recognition of human action for scene understanding using world cup optimization and transfer learning approach.
PeerJ Comput Sci. 2023 May 23;9:e1396. doi: 10.7717/peerj-cs.1396. eCollection 2023.

References

1
Atrous Convolutions and Residual GRU Based Architecture for Matching Power Demand with Supply.
Sensors (Basel). 2021 Oct 29;21(21):7191. doi: 10.3390/s21217191.
2
Light-DehazeNet: A Novel Lightweight CNN Architecture for Single Image Dehazing.
IEEE Trans Image Process. 2021;30:8968-8982. doi: 10.1109/TIP.2021.3116790. Epub 2021 Nov 2.
3
An Efficient Anomaly Recognition Framework Using an Attention Residual LSTM in Surveillance Videos.
Sensors (Basel). 2021 Apr 16;21(8):2811. doi: 10.3390/s21082811.
4
Symbiotic Graph Neural Networks for 3D Skeleton-Based Human Action Recognition and Motion Prediction.
IEEE Trans Pattern Anal Mach Intell. 2022 Jun;44(6):3316-3333. doi: 10.1109/TPAMI.2021.3053765. Epub 2022 May 5.
5
Towards Efficient Building Designing: Heating and Cooling Load Prediction via Multi-Output Model.
Sensors (Basel). 2020 Nov 10;20(22):6419. doi: 10.3390/s20226419.
6
Towards Efficient Electricity Forecasting in Residential and Commercial Buildings: A Novel Hybrid CNN with a LSTM-AE based Framework.
Sensors (Basel). 2020 Mar 4;20(5):1399. doi: 10.3390/s20051399.
7
Iss2Image: A Novel Signal-Encoding Technique for CNN-Based Human Activity Recognition.
Sensors (Basel). 2018 Nov 13;18(11):3910. doi: 10.3390/s18113910.
8
Depth-Camera-Based System for Estimating Energy Expenditure of Physical Activities in Gyms.
IEEE J Biomed Health Inform. 2019 May;23(3):1086-1095. doi: 10.1109/JBHI.2018.2840834. Epub 2018 Jun 1.
9
A Human Activity Recognition System Based on Dynamic Clustering of Skeleton Data.
Sensors (Basel). 2017 May 11;17(5):1100. doi: 10.3390/s17051100.
10
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.
IEEE Trans Pattern Anal Mach Intell. 2017 Jun;39(6):1137-1149. doi: 10.1109/TPAMI.2016.2577031. Epub 2016 Jun 6.