Discriminant learning through multiple principal angles for visual recognition.

Affiliations

Department of Electronic Engineering, Tsinghua University, Beijing, China.

Publication info

IEEE Trans Image Process. 2012 Mar;21(3):1381-90. doi: 10.1109/TIP.2011.2169972. Epub 2011 Sep 29.

DOI: 10.1109/TIP.2011.2169972
PMID: 21965205
Abstract

Canonical correlation has been prevalent for multiset-based pairwise subspace analysis. As an extension, discriminant canonical correlations (DCCs) have been developed for classification purposes by learning a global subspace based on Fisher discriminant modeling of pairwise subspaces. However, the discriminative power of DCCs is not optimal, as it measures only the "local" canonical correlations within subspace pairs and lacks a "global" measurement among all the subspaces. In this paper, we propose a multiset discriminant canonical correlation method, the multiple principal angle (MPA). It jointly considers both "local" and "global" canonical correlations by iteratively learning multiple subspaces (one per set) as well as a global discriminative subspace, on which the angle among subspaces of the same class is minimized while that among subspaces of different classes is maximized. The proposed computational solution is guaranteed to converge, and does so much faster than DCC. Extensive experiments on pattern recognition applications demonstrate the superior performance of MPA compared to existing subspace learning methods.
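The measurement underlying both DCC and MPA is the set of principal angles between subspaces: their cosines are the singular values of Qa^T Qb, where Qa and Qb are orthonormal bases of the two subspaces (the classical Björck–Golub construction). The following is a minimal sketch of that measurement only, not code from the paper; the `principal_angles` function, the dimensions, and the random image-set data are illustrative assumptions, and the iterative MPA optimization itself is not reproduced.

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles between the column spaces of A and B (radians).

    The cosines of the principal angles are the singular values of
    Qa^T Qb, where Qa and Qb are orthonormal bases of span(A), span(B).
    """
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    # Singular values lie in [0, 1]; clip to guard against round-off
    # before taking arccos.
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    s = np.clip(s, 0.0, 1.0)
    return np.arccos(s)

# Two hypothetical image-set subspaces: 50-dim features, 3 basis vectors each.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3))
B = rng.standard_normal((50, 3))
angles = principal_angles(A, B)  # 3 angles, each in [0, pi/2]
```

In a discriminative method of this kind, these angles are what the learned global subspace shrinks for same-class subspace pairs and enlarges for different-class pairs.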


Similar articles

1
Discriminant learning through multiple principal angles for visual recognition.
IEEE Trans Image Process. 2012 Mar;21(3):1381-90. doi: 10.1109/TIP.2011.2169972. Epub 2011 Sep 29.
2
Discriminative learning and recognition of image set classes using canonical correlations.
IEEE Trans Pattern Anal Mach Intell. 2007 Jun;29(6):1005-18. doi: 10.1109/TPAMI.2007.1037.
3
On-line learning of mutually orthogonal subspaces for face recognition by image sets.
IEEE Trans Image Process. 2010 Apr;19(4):1067-74. doi: 10.1109/TIP.2009.2038621. Epub 2009 Dec 15.
4
Image classification using correlation tensor analysis.
IEEE Trans Image Process. 2008 Feb;17(2):226-34. doi: 10.1109/TIP.2007.914203.
5
Subspaces indexing model on Grassmann manifold for image search.
IEEE Trans Image Process. 2011 Sep;20(9):2627-35. doi: 10.1109/TIP.2011.2114354. Epub 2011 Feb 14.
6
Sparse tensor discriminant analysis.
IEEE Trans Image Process. 2013 Oct;22(10):3904-15. doi: 10.1109/TIP.2013.2264678. Epub 2013 May 22.
7
Discriminant subspace analysis: a Fukunaga-Koontz approach.
IEEE Trans Pattern Anal Mach Intell. 2007 Oct;29(10):1732-45. doi: 10.1109/TPAMI.2007.1089.
8
Approximate nearest subspace search.
IEEE Trans Pattern Anal Mach Intell. 2011 Feb;33(2):266-78. doi: 10.1109/TPAMI.2010.110.
9
Multibody grouping by inference of multiple subspaces from high-dimensional data using oriented-frames.
IEEE Trans Pattern Anal Mach Intell. 2006 Jan;28(1):91-105. doi: 10.1109/TPAMI.2006.16.
10
Boosting random subspace method.
Neural Netw. 2008 Nov;21(9):1344-62. doi: 10.1016/j.neunet.2007.12.046. Epub 2008 Jan 6.