
Deeply Learned View-Invariant Features for Cross-View Action Recognition.

Publication Information

IEEE Trans Image Process. 2017 Jun;26(6):3028-3037. doi: 10.1109/TIP.2017.2696786. Epub 2017 Apr 24.

DOI: 10.1109/TIP.2017.2696786
PMID: 28436876
Abstract

Classifying human actions from varied views is challenging due to huge data variations in different views. The key to this problem is to learn discriminative view-invariant features robust to view variations. In this paper, we address this problem by learning view-specific and view-shared features using novel deep models. View-specific features capture unique dynamics of each view while view-shared features encode common patterns across views. A novel sample-affinity matrix is introduced in learning shared features, which accurately balances information transfer within the samples from multiple views and limits the transfer across samples. This allows us to learn more discriminative shared features robust to view variations. In addition, the incoherence between the two types of features is encouraged to reduce information redundancy and exploit discriminative information in them separately. The discriminative power of the learned features is further improved by encouraging features in the same categories to be geometrically closer. Robust view-invariant features are finally learned by stacking several layers of features. Experimental results on three multi-view data sets show that our approaches outperform the state-of-the-art approaches.
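The abstract describes the approach at a high level: per-view ("view-specific") encoders, a shared encoder for view-shared features, an incoherence penalty keeping the two feature types from overlapping, a term pulling same-class features geometrically closer, and several such layers stacked. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch, not the authors' implementation; the network sizes, the simplified losses, and the loss weights are assumptions, and the sample-affinity weighting of shared-feature transfer is omitted for brevity.

```python
# Hypothetical sketch: view-specific + view-shared feature learning with an
# incoherence penalty and a class-compactness term. Not the authors' code;
# all dimensions and loss weights are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewInvariantLayer(nn.Module):
    def __init__(self, in_dim, feat_dim, n_views, n_classes):
        super().__init__()
        # one view-specific encoder per camera view
        self.specific = nn.ModuleList(
            nn.Linear(in_dim, feat_dim) for _ in range(n_views)
        )
        # a single encoder producing view-shared features
        self.shared = nn.Linear(in_dim, feat_dim)
        self.classifier = nn.Linear(2 * feat_dim, n_classes)

    def forward(self, x, view_idx):
        # x: (batch, in_dim); view_idx: (batch,) integer view labels
        z_shared = self.shared(x)
        z_specific = torch.stack(
            [self.specific[int(v)](xi) for xi, v in zip(x, view_idx)]
        )
        logits = self.classifier(torch.cat([z_specific, z_shared], dim=1))
        return z_specific, z_shared, logits

def auxiliary_losses(z_specific, z_shared, labels):
    # incoherence: discourage overlap between the two feature types
    cos = (F.normalize(z_specific, dim=1) * F.normalize(z_shared, dim=1)).sum(dim=1)
    incoherence = cos.pow(2).mean()
    # compactness: shared features of the same action class should be close
    same_class = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    sq_dists = torch.cdist(z_shared, z_shared).pow(2)
    compactness = (sq_dists * same_class).sum() / same_class.sum().clamp(min=1)
    return incoherence, compactness

# Illustrative usage with random data: 3 camera views, 10 action classes
model = ViewInvariantLayer(in_dim=128, feat_dim=64, n_views=3, n_classes=10)
x = torch.randn(8, 128)
views = torch.randint(0, 3, (8,))
labels = torch.randint(0, 10, (8,))
z_specific, z_shared, logits = model(x, views)
incoherence, compactness = auxiliary_losses(z_specific, z_shared, labels)
loss = F.cross_entropy(logits, labels) + 0.1 * incoherence + 0.1 * compactness
loss.backward()
```

In the paper itself, information transfer for the shared features is additionally balanced by the sample-affinity matrix, and multiple such layers are stacked to obtain the final view-invariant representation.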


Similar Articles

1. Deeply Learned View-Invariant Features for Cross-View Action Recognition.
IEEE Trans Image Process. 2017 Jun;26(6):3028-3037. doi: 10.1109/TIP.2017.2696786. Epub 2017 Apr 24.
2. Learning a Deep Model for Human Action Recognition from Novel Viewpoints.
IEEE Trans Pattern Anal Mach Intell. 2018 Mar;40(3):667-681. doi: 10.1109/TPAMI.2017.2691768. Epub 2017 Apr 6.
3. Discriminative shared Gaussian processes for multiview and view-invariant facial expression recognition.
IEEE Trans Image Process. 2015 Jan;24(1):189-204. doi: 10.1109/TIP.2014.2375634. Epub 2014 Nov 26.
4. Learning Spatio-Temporal Representations for Action Recognition: A Genetic Programming Approach.
IEEE Trans Cybern. 2016 Jan;46(1):158-70. doi: 10.1109/TCYB.2015.2399172. Epub 2015 Feb 13.
5. Action Spotting and Recognition Based on a Spatiotemporal Orientation Analysis.
IEEE Trans Pattern Anal Mach Intell. 2013 Mar;35(3):527-40. doi: 10.1109/TPAMI.2012.141.
6. Submodular Attribute Selection for Visual Recognition.
IEEE Trans Pattern Anal Mach Intell. 2017 Nov;39(11):2242-2255. doi: 10.1109/TPAMI.2016.2636827. Epub 2016 Dec 7.
7. Cross-domain human action recognition.
IEEE Trans Syst Man Cybern B Cybern. 2012 Apr;42(2):298-307. doi: 10.1109/TSMCB.2011.2166761. Epub 2011 Sep 26.
8. Multiple/single-view human action recognition via part-induced multitask structural learning.
IEEE Trans Cybern. 2015 Jun;45(6):1194-208. doi: 10.1109/TCYB.2014.2347057. Epub 2014 Aug 27.
9. Multi-view human activity recognition in distributed camera sensor networks.
Sensors (Basel). 2013 Jul 8;13(7):8750-70. doi: 10.3390/s130708750.
10. Explicit modeling of human-object interactions in realistic videos.
IEEE Trans Pattern Anal Mach Intell. 2013 Apr;35(4):835-48. doi: 10.1109/TPAMI.2012.175.

Cited By

1. Methods, Databases and Recent Advancement of Vision-Based Hand Gesture Recognition for HCI Systems: A Review.
SN Comput Sci. 2021;2(6):436. doi: 10.1007/s42979-021-00827-x. Epub 2021 Aug 29.
2. A Hierarchical View Pooling Network for Multichannel Surface Electromyography-Based Gesture Recognition.
Comput Intell Neurosci. 2021 Aug 26;2021:6591035. doi: 10.1155/2021/6591035. eCollection 2021.
3. Designing a Computer-Vision Application: A Case Study for Hand-Hygiene Assessment in an Open-Room Environment.
J Imaging. 2021 Aug 30;7(9):170. doi: 10.3390/jimaging7090170.
4. Multiview Layer Fusion Model for Action Recognition Using RGBD Images.
Comput Intell Neurosci. 2018 Jun 20;2018:9032945. doi: 10.1155/2018/9032945. eCollection 2018.