
Discriminative Multiple Canonical Correlation Analysis for Information Fusion.

Publication Information

IEEE Trans Image Process. 2018 Apr;27(4):1951-1965. doi: 10.1109/TIP.2017.2765820. Epub 2017 Oct 23.

Abstract

In this paper, we propose the discriminative multiple canonical correlation analysis (DMCCA) for multimodal information analysis and fusion. DMCCA is capable of extracting more discriminative characteristics from multimodal information representations. Specifically, it finds the projected directions, which simultaneously maximize the within-class correlation and minimize the between-class correlation, leading to better utilization of the multimodal information. In the process, we analytically demonstrate that the optimally projected dimension by DMCCA can be quite accurately predicted, leading to both superior performance and substantial reduction in computational cost. We further verify that canonical correlation analysis (CCA), multiple canonical correlation analysis (MCCA) and discriminative canonical correlation analysis (DCCA) are special cases of DMCCA, thus establishing a unified framework for canonical correlation analysis. We implement a prototype of DMCCA to demonstrate its performance in handwritten digit recognition and human emotion recognition. Extensive experiments show that DMCCA outperforms the traditional methods of serial fusion, CCA, MCCA, and DCCA.
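The abstract's core idea, projecting each modality so that within-class cross-correlation is maximized while between-class cross-correlation is minimized, can be sketched for the two-modality case. The sketch below is a minimal illustration of this discriminative-CCA-style objective, not the paper's exact DMCCA formulation; the function name, the SVD-based solver, and the concatenation-based fusion step are assumptions for illustration.

```python
import numpy as np

def dmcca_style_projections(X, Y, labels, d):
    """Illustrative discriminative CCA-style projection for two modalities.

    X, Y: (n_samples, n_features) matrices for the two modalities.
    labels: (n_samples,) class labels; d: projected dimension.
    Returns projection matrices Wx (fx, d) and Wy (fy, d).
    """
    X = X - X.mean(axis=0)               # center each modality
    Y = Y - Y.mean(axis=0)
    Cw = np.zeros((X.shape[1], Y.shape[1]))
    for c in np.unique(labels):
        idx = labels == c
        sx, sy = X[idx].sum(axis=0), Y[idx].sum(axis=0)
        Cw += np.outer(sx, sy)           # within-class cross-correlation term
    # Between-class term: all-pairs sum minus within-class pairs.
    # (After centering, the all-pairs sum is essentially zero.)
    Cb = np.outer(X.sum(axis=0), Y.sum(axis=0)) - Cw
    M = Cw - Cb                          # maximize within-, minimize between-class
    U, _, Vt = np.linalg.svd(M)          # leading singular directions
    return U[:, :d], Vt[:d].T
```

A fused feature for a downstream classifier could then be formed by projecting and concatenating, e.g. `np.hstack([X @ Wx, Y @ Wy])`, echoing the serial-fusion baseline the abstract compares against.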

