Multi-Level Feature Fusion in CNN-Based Human Action Recognition: A Case Study on EfficientNet-B7.

Author Information

Lueangwitchajaroen Pitiwat, Watcharapinchai Sitapa, Tepsan Worawit, Sooksatra Sorn

Affiliations

National Electronic and Computer Technology Center, National Science and Technology Development Agency, Khlong Luang, Pathum Thani 12120, Thailand.

International College of Digital Innovation, Chiang Mai University, Mueang Chiang Mai, Chiang Mai 50200, Thailand.

Publication Information

J Imaging. 2024 Dec 12;10(12):320. doi: 10.3390/jimaging10120320.

Abstract

Accurate human action recognition is becoming increasingly important across various fields, including healthcare and self-driving cars. A simple approach to enhancing model performance is to incorporate additional data modalities, such as depth frames, point clouds, and skeleton information. While previous studies have predominantly used late fusion techniques to combine these modalities, our research introduces a multi-level fusion approach that combines information at the early, intermediate, and late stages. Furthermore, recognizing the challenges of collecting multiple data types in real-world applications, our approach seeks to exploit multimodal techniques while relying solely on RGB frames as a single data source. In our work, we used RGB frames from the NTU RGB+D dataset as the sole data source, and from these frames we extracted 2D skeleton coordinates and optical flow frames using pre-trained models. We evaluated our multi-level fusion approach with EfficientNet-B7 as a case study, and our methods demonstrated significant improvements, achieving 91.5% accuracy on the NTU RGB+D 60 dataset compared to single-modality and single-view models. Despite their simplicity, our methods are also comparable to other state-of-the-art approaches.
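The pipeline described in the abstract (RGB frames as the only captured modality, with 2D skeletons and optical flow derived from them and fused at several depths of an EfficientNet-B7 backbone) can be made concrete with a short PyTorch sketch. This is a minimal, hypothetical reconstruction, not the authors' code: the 1x1 projection layers, the rendering of skeletons as image-like tensors, and the score-averaging late fusion are assumptions made purely for illustration.

```python
# Minimal sketch of early + intermediate + late fusion over three RGB-derived
# streams (RGB frames, optical flow, skeleton rendered as an image), using
# EfficientNet-B7 backbones. Illustrative only; not the paper's implementation.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b7


def backbone():
    # EfficientNet-B7 feature extractor; its final feature map has 2560 channels.
    return efficientnet_b7(weights=None).features


class MultiLevelFusion(nn.Module):
    def __init__(self, num_classes: int = 60):
        super().__init__()
        # Early fusion: stack the three 3-channel streams into a 9-channel input
        # and project back to 3 channels so a standard backbone can consume it.
        self.early_proj = nn.Conv2d(9, 3, kernel_size=1)
        self.early_net = backbone()

        # Per-stream backbones whose feature maps are fused at an intermediate stage.
        self.rgb_net = backbone()
        self.flow_net = backbone()
        self.pose_net = backbone()
        self.mid_fuse = nn.Conv2d(2560 * 3, 2560, kernel_size=1)

        self.pool = nn.AdaptiveAvgPool2d(1)
        self.early_head = nn.Linear(2560, num_classes)
        self.mid_head = nn.Linear(2560, num_classes)

    def forward(self, rgb, flow, pose):
        # rgb, flow, pose: (B, 3, H, W) tensors derived from the same RGB frames.
        # Early-fusion branch: combine raw inputs before the backbone.
        early = self.early_net(self.early_proj(torch.cat([rgb, flow, pose], dim=1)))
        early_logits = self.early_head(self.pool(early).flatten(1))

        # Intermediate-fusion branch: concatenate per-stream backbone features.
        feats = torch.cat(
            [self.rgb_net(rgb), self.flow_net(flow), self.pose_net(pose)], dim=1
        )
        mid_logits = self.mid_head(self.pool(self.mid_fuse(feats)).flatten(1))

        # Late fusion: average the branch predictions (score-level fusion).
        return (early_logits + mid_logits) / 2


if __name__ == "__main__":
    model = MultiLevelFusion(num_classes=60)
    x = torch.randn(2, 3, 224, 224)
    print(model(x, x, x).shape)  # torch.Size([2, 60])
```

In practice the score averaging at the late stage could be replaced by a learned weighting, and the per-stream backbones could share weights to reduce memory; the sketch only aims to show where the early, intermediate, and late fusion points sit in such an architecture.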


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0e06/11677249/06a0b0e119f0/jimaging-10-00320-g001.jpg
