
Vision Sensor for Automatic Recognition of Human Activities via Hybrid Features and Multi-Class Support Vector Machine.

Author Information

Kamal Saleha, Alhasson Haifa F, Alnusayri Mohammed, Alatiyyah Mohammed, Aljuaid Hanan, Jalal Ahmad, Liu Hui

Affiliations

Department of Computer Science, Air University, Islamabad 44000, Pakistan.

Department of Information Technology, College of Computer, Qassim University, Buraydah 52571, Saudi Arabia.

Publication Information

Sensors (Basel). 2025 Jan 1;25(1):200. doi: 10.3390/s25010200.

Abstract

Over recent years, automated Human Activity Recognition (HAR) has been an area of concern for many researchers due to its widespread application in surveillance systems, healthcare environments, and many other domains. This has led researchers to develop coherent and robust systems that efficiently perform HAR. Although many efficient systems have been developed to date, many issues remain to be addressed. Several elements contribute to the complexity of the task and make it more challenging to detect human activities, i.e., (i) poor lighting conditions; (ii) different viewing angles; (iii) intricate clothing styles; (iv) diverse activities with similar gestures; and (v) limited availability of large datasets. However, through effective feature extraction, we can develop resilient systems with higher accuracy. During feature extraction, we aim to extract unique key body points and full-body features that exhibit distinct attributes for each activity. Our proposed system introduces an innovative approach for the identification of human activity in outdoor and indoor settings by extracting effective spatio-temporal features, combined with a Multi-Class Support Vector Machine, which enhances the model's ability to accurately identify the activity classes. The experimental findings show that our model outperforms others in terms of classification accuracy and generalization, indicating its efficient analysis of benchmark datasets. Various performance metrics, including mean recognition accuracy, precision, F1 score, and recall, assess the effectiveness of our model. The assessment findings show remarkable recognition rates of around 88.61%, 87.33%, 86.5%, and 81.25% on the BIT-Interaction, UT-Interaction, NTU RGB+D 120, and PKUMMD datasets, respectively.
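The abstract names the classification stage (a Multi-Class Support Vector Machine over hybrid spatio-temporal features) and the evaluation metrics (mean recognition accuracy, precision, recall, F1 score) but gives no implementation detail, so the following is a minimal Python/scikit-learn sketch of that stage only. It assumes the hybrid key-body-point and full-body descriptors have already been extracted and fused into fixed-length vectors; the arrays X and y, the feature dimensionality, the class count, and the SVM hyperparameters are placeholder assumptions for illustration, not values from the paper.

```python
# Minimal sketch (not the authors' code): multi-class SVM over pre-extracted
# hybrid feature vectors, evaluated with the metrics named in the abstract.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(0)
n_samples, n_features, n_classes = 300, 128, 8    # placeholder sizes, not from the paper
X = rng.normal(size=(n_samples, n_features))      # stand-in for fused spatio-temporal descriptors
y = rng.integers(0, n_classes, size=n_samples)    # stand-in for activity labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# RBF-kernel SVM; scikit-learn extends the binary SVM to multi-class internally.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred, average="macro", zero_division=0))
print("recall   :", recall_score(y_test, y_pred, average="macro", zero_division=0))
print("F1 score :", f1_score(y_test, y_pred, average="macro", zero_division=0))
```

Macro averaging is used here so every activity class weighs equally in precision, recall, and F1; this is a common choice for class-balanced benchmarks but is only an assumption about how the reported scores were aggregated.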


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fbd8/11723259/ea2b4f74bc72/sensors-25-00200-g001.jpg
