Submodular Attribute Selection for Visual Recognition.

Publication Information

IEEE Trans Pattern Anal Mach Intell. 2017 Nov;39(11):2242-2255. doi: 10.1109/TPAMI.2016.2636827. Epub 2016 Dec 7.

DOI: 10.1109/TPAMI.2016.2636827
PMID: 28114004
Abstract

In real-world visual recognition problems, low-level features cannot adequately characterize the semantic content of images or the spatio-temporal structure of videos. In this work, we encode objects or actions in terms of attributes that describe them as high-level concepts. We consider two types of attributes: one generated by humans, and the other consisting of data-driven attributes extracted from data using dictionary learning methods. Attribute-based representations may exhibit variations due to noisy and redundant attributes. We propose a discriminative and compact attribute-based representation obtained by selecting a subset of discriminative attributes from a large attribute set. Three attribute selection criteria are proposed and formulated as a submodular optimization problem. A greedy optimization algorithm is presented, and its solution is guaranteed to be at least a (1-1/e)-approximation of the optimum. Experimental results on four public datasets demonstrate that the proposed attribute-based representation significantly boosts the performance of visual recognition and outperforms most recently proposed recognition approaches.
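The abstract's key algorithmic claim is the greedy routine with a (1-1/e) guarantee, which holds for any monotone submodular objective maximized under a cardinality constraint (Nemhauser, Wolsey & Fisher, 1978). As a minimal sketch of that generic technique, not of the paper's actual selection criteria, the following Python pairs the greedy loop with a toy coverage objective; the function names and the coverage data are illustrative assumptions.

```python
def greedy_submodular_select(ground_set, score_fn, budget):
    """Greedy maximization of a monotone submodular set function under a
    cardinality constraint. For such objectives, the greedy solution is
    guaranteed to achieve at least (1 - 1/e) of the optimal value.

    score_fn is assumed (not checked) to be monotone and submodular.
    """
    selected = set()
    candidates = set(ground_set)
    while len(selected) < budget and candidates:
        base = score_fn(frozenset(selected))
        # Pick the candidate attribute with the largest marginal gain.
        best, best_gain = None, float("-inf")
        for a in candidates:
            gain = score_fn(frozenset(selected | {a})) - base
            if gain > best_gain:
                best, best_gain = a, gain
        selected.add(best)
        candidates.remove(best)
    return selected

# Hypothetical example: coverage objectives are a classic monotone
# submodular family. Each "attribute" covers some samples; a subset's
# score is the number of distinct samples it covers.
coverage = {0: {1, 2}, 1: {2, 3, 4}, 2: {5}, 3: {1, 5}}

def coverage_score(attrs):
    return len(set().union(*(coverage[a] for a in attrs))) if attrs else 0

print(greedy_submodular_select(coverage, coverage_score, budget=2))
# -> {1, 3}: attribute 1 covers three samples, then 3 adds two more.
```

The paper instead formulates three discriminative selection criteria as the objective; any monotone submodular score_fn can be swapped in here without losing the approximation guarantee.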

Similar Articles

1. Submodular Attribute Selection for Visual Recognition.
IEEE Trans Pattern Anal Mach Intell. 2017 Nov;39(11):2242-2255. doi: 10.1109/TPAMI.2016.2636827. Epub 2016 Dec 7.
2. Max-margin multiattribute learning with low-rank constraint.
IEEE Trans Image Process. 2014 Jul;23(7):2866-76. doi: 10.1109/TIP.2014.2322446. Epub 2014 May 7.
3. Learning sparse representations for human action recognition.
IEEE Trans Pattern Anal Mach Intell. 2012 Aug;34(8):1576-88. doi: 10.1109/TPAMI.2011.253.
4. Action Spotting and Recognition Based on a Spatiotemporal Orientation Analysis.
IEEE Trans Pattern Anal Mach Intell. 2013 Mar;35(3):527-40. doi: 10.1109/TPAMI.2012.141.
5. A Richly Annotated Pedestrian Dataset for Person Retrieval in Real Surveillance Scenarios.
IEEE Trans Image Process. 2019 Apr;28(4):1575-1590. doi: 10.1109/TIP.2018.2878349. Epub 2018 Oct 26.
6. Deeply Learned View-Invariant Features for Cross-View Action Recognition.
IEEE Trans Image Process. 2017 Jun;26(6):3028-3037. doi: 10.1109/TIP.2017.2696786. Epub 2017 Apr 24.
7. Discovering motion primitives for unsupervised grouping and one-shot learning of human actions, gestures, and expressions.
IEEE Trans Pattern Anal Mach Intell. 2013 Jul;35(7):1635-48. doi: 10.1109/TPAMI.2012.253.
8. Dynamic Spatio-Temporal Bag of Expressions (D-STBoE) Model for Human Action Recognition.
Sensors (Basel). 2019 Jun 21;19(12):2790. doi: 10.3390/s19122790.
9. Cross-domain human action recognition.
IEEE Trans Syst Man Cybern B Cybern. 2012 Apr;42(2):298-307. doi: 10.1109/TSMCB.2011.2166761. Epub 2011 Sep 26.
10. Evaluation of color spatio-temporal interest points for human action recognition.
IEEE Trans Image Process. 2014 Apr;23(4):1569-80. doi: 10.1109/TIP.2014.2302677.