


Recognition of human action for scene understanding using world cup optimization and transfer learning approach.

Authors

Surendran Ranjini, J Anitha, Hemanth Jude D

Affiliation

Department of ECE, Karunya Institute of Technology and Sciences, Coimbatore, India.

Publication

PeerJ Comput Sci. 2023 May 23;9:e1396. doi: 10.7717/peerj-cs.1396. eCollection 2023.

DOI: 10.7717/peerj-cs.1396
PMID: 37346707
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10280426/
Abstract

Understanding human activities is one of the vital steps in visual scene recognition. Human daily activities include diverse scenes with multiple objects having complex interrelationships with each other. Representation of human activities finds application in areas such as surveillance, health care systems, entertainment, and automated patient monitoring systems. Our work focuses on classifying scenes into different classes of human activities, such as waving hands, gardening, walking, and running. The dataset classes were pre-processed using the fuzzy color stacking technique. We adopted the transfer learning concept of pretrained deep CNN models. Our proposed methodology employs pretrained AlexNet, SqueezeNet, ResNet, and DenseNet for feature extraction. The adaptive World Cup Optimization (WCO) algorithm is then used to select the dominant features. These dominant features are classified by the fully connected classifier layer of DenseNet 201. Evaluation of the performance metrics showed an accuracy of 94.7% with DenseNet as the feature extractor and WCO for feature selection, compared to other models. Our proposed methodology also proved superior to its counterpart without feature selection. Thus, we could improve the quality of the classification model by providing double filtering using the WCO feature selection process.
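The pipeline described above — deep features extracted by a pretrained CNN, a population-based WCO step that selects a dominant feature subset, then a final classifier — can be sketched in miniature. The sketch below is NOT the authors' exact algorithm: the tournament-style update rule, the mutation rate, and the nearest-centroid fitness function are simplifying assumptions, and random toy data stands in for DenseNet features.

```python
# Simplified population-based feature selection in the spirit of
# World Cup Optimization (WCO). Assumptions: binary masks act as
# "teams", losers copy traits from the champion, and subset quality
# is scored by nearest-centroid training accuracy.
import numpy as np

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Score a feature subset by nearest-centroid training accuracy."""
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask]
    classes = np.unique(y)
    centroids = np.stack([Xs[y == c].mean(axis=0) for c in classes])
    d = ((Xs[:, None, :] - centroids[None]) ** 2).sum(axis=2)
    pred = classes[d.argmin(axis=1)]
    return float((pred == y).mean())

def wco_select(X, y, n_teams=8, n_rounds=20, p_flip=0.1):
    """Evolve binary masks; each round, losers move toward the champion."""
    n_feat = X.shape[1]
    teams = rng.random((n_teams, n_feat)) < 0.5
    for _ in range(n_rounds):
        scores = np.array([fitness(t, X, y) for t in teams])
        order = np.argsort(-scores)            # rank teams by fitness
        best = teams[order[0]].copy()
        for i in order[1:]:
            copy_from_best = rng.random(n_feat) < 0.5
            teams[i] = np.where(copy_from_best, best, teams[i])
            teams[i] ^= rng.random(n_feat) < p_flip  # random mutation
    scores = np.array([fitness(t, X, y) for t in teams])
    return teams[scores.argmax()]

# Toy data: 2 classes, only the first 3 of 20 features are informative.
X = rng.normal(size=(60, 20))
y = np.repeat([0, 1], 30)
X[y == 1, :3] += 3.0
mask = wco_select(X, y)
print("selected features:", int(mask.sum()), "accuracy:", fitness(mask, X, y))
```

In the paper's setting, `X` would be the feature vectors from the pretrained DenseNet and the selected subset would be passed to the DenseNet 201 fully connected classifier; the "double filtering" the abstract refers to is this extra selection stage between extraction and classification.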


[Figures 1-20 are available in the full text at https://pmc.ncbi.nlm.nih.gov/articles/PMC10280426/]

Similar articles

1. Recognition of human action for scene understanding using world cup optimization and transfer learning approach. PeerJ Comput Sci. 2023 May 23;9:e1396. doi: 10.7717/peerj-cs.1396. eCollection 2023.
2. Automated detection of leukemia by pretrained deep neural networks and transfer learning: A comparison. Med Eng Phys. 2021 Dec;98:8-19. doi: 10.1016/j.medengphy.2021.10.006. Epub 2021 Oct 13.
3. Optimized deep learning vision system for human action recognition from drone images. Multimed Tools Appl. 2023 Jun 2:1-22. doi: 10.1007/s11042-023-15930-9.
4. Few-shot cotton leaf spots disease classification based on metric learning. Plant Methods. 2021 Nov 8;17(1):114. doi: 10.1186/s13007-021-00813-7.
5. Colon Disease Diagnosis with Convolutional Neural Network and Grasshopper Optimization Algorithm. Diagnostics (Basel). 2023 May 12;13(10):1728. doi: 10.3390/diagnostics13101728.
6. Transfer of Learning in the Convolutional Neural Networks on Classifying Geometric Shapes Based on Local or Global Invariants. Front Comput Neurosci. 2021 Feb 19;15:637144. doi: 10.3389/fncom.2021.637144. eCollection 2021.
7. A full convolutional network based on DenseNet for remote sensing scene classification. Math Biosci Eng. 2019 Apr 18;16(5):3345-3367. doi: 10.3934/mbe.2019167.
8. Segmentation and Classification of Glaucoma Using U-Net with Deep Learning Model. J Healthc Eng. 2022 Feb 16;2022:1601354. doi: 10.1155/2022/1601354. eCollection 2022.
9. Urban Tree Species Classification Using a WorldView-2/3 and LiDAR Data Fusion Approach and Deep Learning. Sensors (Basel). 2019 Mar 14;19(6):1284. doi: 10.3390/s19061284.
10. The Role of Knowledge Creation-Oriented Convolutional Neural Network in Learning Interaction. Comput Intell Neurosci. 2022 Mar 16;2022:6493311. doi: 10.1155/2022/6493311. eCollection 2022.

References cited in this article

1. Human Activity Recognition via Hybrid Deep Learning Based Model. Sensors (Basel). 2022 Jan 1;22(1):323. doi: 10.3390/s22010323.
2. Model selection for within-batch effect correction in UPLC-MS metabolomics using quality control - Support vector regression. Anal Chim Acta. 2018 Oct 5;1026:62-68. doi: 10.1016/j.aca.2018.04.055. Epub 2018 Apr 23.
3. A Hybrid Neural Network - World Cup Optimization Algorithm for Melanoma Detection. Open Med (Wars). 2018 Mar 15;13:9-16. doi: 10.1515/med-2018-0002. eCollection 2018.
4. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Trans Pattern Anal Mach Intell. 2018 Apr;40(4):834-848. doi: 10.1109/TPAMI.2017.2699184. Epub 2017 Apr 27.
5. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans Pattern Anal Mach Intell. 2017 Dec;39(12):2481-2495. doi: 10.1109/TPAMI.2016.2644615. Epub 2017 Jan 2.
6. Action Recognition in Still Images With Minimum Annotation Efforts. IEEE Trans Image Process. 2016 Nov;25(11):5479-5490. doi: 10.1109/TIP.2016.2605305. Epub 2016 Sep 1.
7. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans Pattern Anal Mach Intell. 2017 Jun;39(6):1137-1149. doi: 10.1109/TPAMI.2016.2577031. Epub 2016 Jun 6.
8. Anticipating Human Activities Using Object Affordances for Reactive Robotic Response. IEEE Trans Pattern Anal Mach Intell. 2016 Jan;38(1):14-29. doi: 10.1109/TPAMI.2015.2430335.
9. Recognizing Actions Through Action-Specific Person Detection. IEEE Trans Image Process. 2015 Nov;24(11):4422-32. doi: 10.1109/TIP.2015.2465147. Epub 2015 Aug 5.
10. Deep learning. Nature. 2015 May 28;521(7553):436-44. doi: 10.1038/nature14539.