

Jointly Learning Heterogeneous Features for RGB-D Activity Recognition.

Publication Information

IEEE Trans Pattern Anal Mach Intell. 2017 Nov;39(11):2186-2200. doi: 10.1109/TPAMI.2016.2640292. Epub 2016 Dec 15.

Abstract

In this paper, we focus on heterogeneous feature learning for RGB-D activity recognition. We observe that features from different channels (RGB, depth) can share similar hidden structures, and we therefore propose a joint learning model that simultaneously explores the shared and feature-specific components as an instance of heterogeneous multi-task learning. The proposed model, formulated in a unified framework, is capable of: 1) jointly mining a set of subspaces with the same dimensionality to exploit latent shared features across different feature channels; 2) quantifying, at the same time, the shared and feature-specific components of features in these subspaces; and 3) transferring feature-specific intermediate transforms (i-transforms) to learn the fusion of heterogeneous features across datasets. To train the joint model efficiently, a three-step iterative optimization algorithm is proposed, followed by a simple inference model. Extensive experimental results on four activity datasets demonstrate the efficacy of the proposed method. A new RGB-D activity dataset focusing on human-object interaction is further contributed, which presents more challenges for RGB-D activity benchmarking.
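The abstract describes the approach only at a high level. The toy sketch below is not the authors' actual model; it merely illustrates the general idea of decomposing per-channel (RGB, depth) features into a shared subspace component plus channel-specific components, optimized with a three-step alternating scheme. The objective (a ridge-regularized least-squares fit), the closed-form updates, and all names and parameters (lam, beta, gamma, n_components) are simplifying assumptions made for illustration.

```python
# Minimal sketch of joint shared / channel-specific subspace learning.
# NOT the paper's formulation; a toy least-squares analogue for illustration.
import numpy as np


def joint_subspace_learning(channels, n_components=10, lam=1.0, beta=1.0,
                            gamma=1.0, n_iters=50, seed=0):
    """Alternating (three-step) minimization of a toy objective:

        sum_c ||X_c W_c - (Z_shared + Z_c)||^2
            + lam*||W_c||^2 + beta*||Z_shared||^2 + gamma*||Z_c||^2

    X_c: feature matrix of channel c (e.g. RGB or depth descriptors of the
         same clips), W_c: projection of channel c into a common k-dim
         subspace, Z_shared: component common to all channels, Z_c: the
         channel-specific component.
    """
    rng = np.random.default_rng(seed)
    n, k, C = channels[0].shape[0], n_components, len(channels)

    W = [rng.standard_normal((X.shape[1], k)) * 0.01 for X in channels]
    Z_shared = np.zeros((n, k))
    Z_spec = [np.zeros((n, k)) for _ in channels]

    for _ in range(n_iters):
        # Step 1: update each projection W_c by ridge regression onto Z_shared + Z_c.
        for c, X in enumerate(channels):
            target = Z_shared + Z_spec[c]
            W[c] = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ target)
        # Step 2: update the shared component (closed form: averaged residuals).
        Z_shared = sum(channels[c] @ W[c] - Z_spec[c] for c in range(C)) / (C + beta)
        # Step 3: update each channel-specific component (shrunk residual).
        for c, X in enumerate(channels):
            Z_spec[c] = (X @ W[c] - Z_shared) / (1.0 + gamma)

    return W, Z_shared, Z_spec


if __name__ == "__main__":
    # Synthetic stand-ins for per-clip RGB and depth descriptors (100 clips)
    # generated from a common hidden structure plus noise.
    rng = np.random.default_rng(0)
    shared = rng.standard_normal((100, 5))
    X_rgb = shared @ rng.standard_normal((5, 200)) + 0.1 * rng.standard_normal((100, 200))
    X_depth = shared @ rng.standard_normal((5, 120)) + 0.1 * rng.standard_normal((100, 120))

    W, Z_shared, Z_spec = joint_subspace_learning([X_rgb, X_depth], n_components=5)
    # A fused representation for a downstream classifier could concatenate the
    # shared and specific parts, e.g. np.hstack([Z_shared] + Z_spec).
    print(Z_shared.shape, Z_spec[0].shape, Z_spec[1].shape)
```

Each of the three steps has a closed-form solution under this toy objective, which is what makes the alternating scheme cheap per iteration; the paper's actual model, updates, and inference procedure differ and should be taken from the original text.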

