Detection of human activities using multi-layer convolutional neural network.

Authors

Abdellatef Essam, Al-Makhlasawy Rasha M, Shalaby Wafaa A

Affiliations

Department of Electrical Engineering, Faculty of Engineering, Sinai University, El-Arish, 45511, Egypt.

Electronics Research Institute, Joseph Tito St, El Nozha, P.O. Box: 12622, Cairo, Egypt.

Publication

Sci Rep. 2025 Feb 27;15(1):7004. doi: 10.1038/s41598-025-90307-6.

DOI: 10.1038/s41598-025-90307-6
PMID: 40016243
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11868505/
Abstract

Human Activity Recognition (HAR) plays a critical role in fields such as healthcare, sports, and human-computer interaction. However, achieving high accuracy and robustness remains a challenge, particularly when dealing with noisy sensor data from accelerometers and gyroscopes. This paper introduces HARCNN, a novel approach leveraging Convolutional Neural Networks (CNNs) to extract hierarchical spatial and temporal features from raw sensor data, enhancing activity recognition performance. The HARCNN model is designed with 10 convolutional blocks, referred to as "ConvBlk." Each block integrates a convolutional layer, a ReLU activation function, and a batch normalization layer. The outputs from specific block pairs ("ConvBlk_3 and ConvBlk_4," "ConvBlk_6 and ConvBlk_7," and "ConvBlk_9 and ConvBlk_10") are fused using a depth concatenation approach. The concatenated outputs are subsequently passed through a 2 × 2 max-pooling layer with a stride of 2 for further processing. The proposed HARCNN framework is evaluated using accuracy, precision, sensitivity, and F-score as key metrics, reflecting the model's ability to correctly classify and differentiate between human activities. The proposed model's performance is compared to traditional pre-trained CNNs and other state-of-the-art techniques. By leveraging advanced feature extraction and optimized learning strategies, the proposed model achieves accuracies of 97.87%, 99.12%, 96.58%, and 98.51% on the UCI-HAR, KU-HAR, WISDM, and HMDB51 human-activity datasets, respectively. This comparison underscores the model's robustness, highlighting improvements in minimizing false positives and false negatives, which are crucial for real-world applications where reliable predictions are essential. The experiments were conducted with various window sizes (50ms, 100ms, 200ms, 500ms, 1s, and 2s).
The results indicate that the proposed method achieves high accuracy and reliability across these different window sizes, highlighting its ability to adapt to varying temporal granularities without significant loss of performance. This demonstrates the method's effectiveness and robustness, making it well-suited for deployment in diverse HAR scenarios. Notably, the best results were obtained with a window size of 200ms.
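The fusion step described in the abstract (depth concatenation of paired ConvBlk outputs, followed by 2 × 2 max pooling with stride 2) can be sketched in NumPy. The shapes and channel counts below are illustrative assumptions, not the paper's actual layer dimensions:

```python
import numpy as np

def depth_concat(a, b):
    """Fuse two feature maps along the channel (depth) axis.

    a, b: arrays of shape (H, W, C1) and (H, W, C2) with matching H, W.
    Returns an array of shape (H, W, C1 + C2).
    """
    return np.concatenate([a, b], axis=-1)

def max_pool_2x2(x):
    """2x2 max pooling with stride 2 over the spatial dimensions.

    x: array of shape (H, W, C) with H and W even.
    Returns an array of shape (H // 2, W // 2, C).
    """
    h, w, c = x.shape
    # Group each non-overlapping 2x2 spatial patch, then take its maximum.
    patches = x.reshape(h // 2, 2, w // 2, 2, c)
    return patches.max(axis=(1, 3))

# Illustrative shapes: two 8x8 feature maps with 16 channels each,
# standing in for the outputs of a pair of ConvBlks.
out_blk_a = np.random.rand(8, 8, 16)
out_blk_b = np.random.rand(8, 8, 16)

fused = depth_concat(out_blk_a, out_blk_b)   # shape (8, 8, 32)
pooled = max_pool_2x2(fused)                 # shape (4, 4, 32)
print(fused.shape, pooled.shape)
```

Concatenating along depth preserves the spatial resolution of both branches while letting later layers mix their features; the stride-2 pooling then halves each spatial dimension.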

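The windowing experiment (50 ms to 2 s, best results at 200 ms) presupposes segmenting the raw accelerometer/gyroscope stream into fixed-length windows before classification. A minimal sketch of that segmentation, assuming a hypothetical 50 Hz sampling rate and 50% overlap (the paper's actual rates and overlap depend on the dataset):

```python
import numpy as np

def segment_windows(signal, window_ms, rate_hz, overlap=0.5):
    """Split a (T, channels) sensor stream into fixed-length windows.

    window_ms: window length in milliseconds (e.g. 50, 100, 200, 500, 1000, 2000).
    rate_hz:   sampling rate of the sensor stream.
    overlap:   fraction of overlap between consecutive windows.
    Returns an array of shape (num_windows, samples_per_window, channels).
    """
    samples = max(1, int(round(window_ms / 1000 * rate_hz)))
    step = max(1, int(samples * (1 - overlap)))
    starts = range(0, signal.shape[0] - samples + 1, step)
    return np.stack([signal[s:s + samples] for s in starts])

# Hypothetical stream: 10 s of 6-axis (accel + gyro) data at 50 Hz.
rate = 50
stream = np.random.randn(10 * rate, 6)

for window_ms in (50, 100, 200, 500, 1000, 2000):
    windows = segment_windows(stream, window_ms, rate)
    print(window_ms, "ms ->", windows.shape)
```

Shorter windows yield more (but less informative) segments per recording, while longer windows capture more temporal context at the cost of latency; this trade-off is what the paper's window-size sweep explores.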

Figures (PMC)

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5220/11868505/baa9d6a2db8e/41598_2025_90307_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5220/11868505/388e22ae46ad/41598_2025_90307_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5220/11868505/a362e2ecf206/41598_2025_90307_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5220/11868505/ff8053e1439f/41598_2025_90307_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5220/11868505/264d89d927fb/41598_2025_90307_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5220/11868505/5e9a02d9c0b2/41598_2025_90307_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5220/11868505/439e7b28cf4e/41598_2025_90307_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5220/11868505/88d93c83b293/41598_2025_90307_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5220/11868505/9d3e625d908b/41598_2025_90307_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5220/11868505/4fe6fae63a8d/41598_2025_90307_Fig10_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5220/11868505/fd12609d6fc8/41598_2025_90307_Fig11_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5220/11868505/e012877a9a18/41598_2025_90307_Fig12_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5220/11868505/7239e6ff0788/41598_2025_90307_Fig13_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5220/11868505/2756c20578ad/41598_2025_90307_Fig14_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5220/11868505/2be0591e14ba/41598_2025_90307_Fig15_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5220/11868505/fcf8615f4a15/41598_2025_90307_Fig16_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5220/11868505/26de764f7390/41598_2025_90307_Fig17_HTML.jpg

Similar Articles

1. Detection of human activities using multi-layer convolutional neural network. Sci Rep. 2025 Feb 27;15(1):7004. doi: 10.1038/s41598-025-90307-6.
2. Brain tumor segmentation and detection in MRI using convolutional neural networks and VGG16. Cancer Biomark. 2025 Mar;42(3):18758592241311184. doi: 10.1177/18758592241311184. Epub 2025 Apr 4.
3. Achieving More with Less: A Lightweight Deep Learning Solution for Advanced Human Activity Recognition (HAR). Sensors (Basel). 2024 Aug 22;24(16):5436. doi: 10.3390/s24165436.
4. Human Activity Recognition Using Attention-Mechanism-Based Deep Learning Feature Combination. Sensors (Basel). 2023 Jun 19;23(12):5715. doi: 10.3390/s23125715.
5. Deep Wavelet Convolutional Neural Networks for Multimodal Human Activity Recognition Using Wearable Inertial Sensors. Sensors (Basel). 2023 Dec 9;23(24):9721. doi: 10.3390/s23249721.
6. Federated Learning for IoMT-Enhanced Human Activity Recognition with Hybrid LSTM-GRU Networks. Sensors (Basel). 2025 Feb 3;25(3):907. doi: 10.3390/s25030907.
7. The Convolutional Neural Networks Training With Channel-Selectivity for Human Activity Recognition Based on Sensors. IEEE J Biomed Health Inform. 2021 Oct;25(10):3834-3843. doi: 10.1109/JBHI.2021.3092396. Epub 2021 Oct 5.
8. Innovative Dual-Decoupling CNN With Layer-Wise Temporal-Spatial Attention for Sensor-Based Human Activity Recognition. IEEE J Biomed Health Inform. 2025 Feb;29(2):1035-1048. doi: 10.1109/JBHI.2024.3488528. Epub 2025 Feb 10.
9. Enhanced Pneumonia Detection in Chest X-Rays Using Hybrid Convolutional and Vision Transformer Networks. Curr Med Imaging. 2025;21:e15734056326685. doi: 10.2174/0115734056326685250101113959.
10. MSTCN: A multiscale temporal convolutional network for user independent human activity recognition. F1000Res. 2021 Dec 8;10:1261. doi: 10.12688/f1000research.73175.2. eCollection 2021.

Cited By

1. AIoT-Based Eyelash Extension Durability Evaluation Using LabVIEW Data Analysis. Sensors (Basel). 2025 Aug 14;25(16):5057. doi: 10.3390/s25165057.
2. Intelligent routing for human activity recognition in wireless body area networks. Sci Rep. 2025 Jul 29;15(1):27720. doi: 10.1038/s41598-025-12114-3.

References

1. Self-supervised learning for human activity recognition using 700,000 person-days of wearable data. NPJ Digit Med. 2024 Apr 12;7(1):91. doi: 10.1038/s41746-024-01062-3.
2. Machine Learning for Human Motion Intention Detection. Sensors (Basel). 2023 Aug 16;23(16):7203. doi: 10.3390/s23167203.
3. Video-Based Human Activity Recognition Using Deep Learning Approaches. Sensors (Basel). 2023 Jul 13;23(14):6384. doi: 10.3390/s23146384.
4. Human Activity Recognition Using Attention-Mechanism-Based Deep Learning Feature Combination. Sensors (Basel). 2023 Jun 19;23(12):5715. doi: 10.3390/s23125715.
5. The Applications of Metaheuristics for Human Activity Recognition and Fall Detection Using Wearable Sensors: A Comprehensive Analysis. Biosensors (Basel). 2022 Oct 3;12(10):821. doi: 10.3390/bios12100821.
6. Human activity recognition using tools of convolutional neural networks: A state of the art review, data sets, challenges, and future prospects. Comput Biol Med. 2022 Oct;149:106060. doi: 10.1016/j.compbiomed.2022.106060. Epub 2022 Sep 1.
7. Applying Deep Learning-Based Human Motion Recognition System in Sports Competition. Front Neurorobot. 2022 May 20;16:860981. doi: 10.3389/fnbot.2022.860981. eCollection 2022.