A Hierarchical Deep Fusion Framework for Egocentric Activity Recognition Using a Wearable Hybrid Sensor System.

Affiliations

College of Electronics and Information, Hangzhou Dianzi University, Hangzhou 310018, China.

Department of Electrical and Computer Engineering, University of Pittsburgh, PA 15261, USA.

Publication Information

Sensors (Basel). 2019 Jan 28;19(3):546. doi: 10.3390/s19030546.

Abstract

Recently, egocentric activity recognition has attracted considerable attention in the pattern recognition and artificial intelligence communities because of its wide applicability in medical care, smart homes, and security monitoring. In this study, we developed and implemented a deep-learning-based hierarchical fusion framework for the recognition of egocentric activities of daily living (ADLs) in a wearable hybrid sensor system comprising motion sensors and cameras. A long short-term memory (LSTM) network and a convolutional neural network (CNN) perform egocentric ADL recognition in different layers, based on the motion sensor data and the photo stream, respectively. The motion sensor data are used solely to classify activities by motion state, while the photo stream is used for further specific activity recognition within each motion-state group. Thus, both the motion sensor data and the photo stream work in their most suitable classification mode, significantly reducing the negative influence of sensor differences on the fusion results. Experimental results show that the proposed method is not only more accurate than the existing direct fusion method (by up to 6%) but also avoids that method's time-consuming optical-flow computation, which makes the proposed algorithm less complex and more suitable for practical application.
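The two-stage pipeline described in the abstract can be sketched as follows. This is a minimal illustration of the hierarchical routing only: the trained LSTM and per-group CNN are replaced by trivial stand-ins (an accelerometer-energy threshold and a dummy scorer), and the motion-state groups and activity labels are illustrative assumptions, not the paper's actual label set.

```python
import numpy as np

# Hypothetical label hierarchy: coarse motion states group the fine-grained ADLs.
MOTION_STATE_GROUPS = {
    "stationary": ["reading", "watching_tv", "eating"],
    "walking": ["shopping", "walking_outdoors"],
}

def classify_motion_state(imu_window):
    """Stage 1 (stand-in for the LSTM): map an IMU window of shape
    (timesteps, 3) to a motion state. A simple mean accelerometer-magnitude
    threshold replaces the trained recurrent model here."""
    energy = float(np.mean(np.linalg.norm(imu_window, axis=1)))
    return "walking" if energy > 1.5 else "stationary"

def classify_activity(photo, candidate_labels):
    """Stage 2 (stand-in for the per-group CNN): score ONLY the activities
    belonging to the predicted motion-state group, never the full label set."""
    scores = {label: float(np.mean(photo)) * (i + 1)
              for i, label in enumerate(candidate_labels)}
    return max(scores, key=scores.get)

def hierarchical_recognize(imu_window, photo):
    """Hierarchical fusion: the motion sensors pick the group, then the
    camera picks the specific activity inside that group."""
    state = classify_motion_state(imu_window)
    activity = classify_activity(photo, MOTION_STATE_GROUPS[state])
    return state, activity
```

The key design point the sketch captures is that the image classifier is conditioned on the motion-sensor decision, so each modality operates only in the classification mode it is suited for, rather than both modalities voting directly over all activity labels.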

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb84/6386921/73aaaf873eee/sensors-19-00546-g001.jpg
