

Multimodal Feature Fusion Based Hypergraph Learning Model.

Affiliations

School of Computer Science & Technology, Soochow University, Suzhou, China.

Provincial Key Laboratory for Computer Information Processing Technology, Suzhou, China.

Publication Information

Comput Intell Neurosci. 2022 May 16;2022:9073652. doi: 10.1155/2022/9073652. eCollection 2022.

Abstract

Hypergraph learning is an emerging research topic in machine learning. The performance of a hypergraph learning model depends on the quality of the hypergraph structure, and hence of its incidence matrix, produced by the chosen feature extraction method. However, existing models all build the hypergraph structure from a single feature extraction method, which limits their feature extraction and abstract representation ability. This paper proposes a multimodal feature fusion method: it first builds a single-modality hypergraph structure from each feature extraction method, and then concatenates the hypergraph incidence matrices and weight matrices of the different modalities into extended matrices. The extended matrices fuse the abstract features of all modalities and enlarge the range of the Markov random walk during model learning, yielding stronger feature representation ability. However, the extended multimodal incidence matrix is large and computationally expensive. A Laplacian matrix fusion method is therefore proposed: it applies the Laplacian transformation to the incidence matrix and weight matrix of each modality separately, and then forms a weighted superposition of the resulting Laplacian matrices for subsequent model training. Tests on four datasets of different types show that the hypergraph learning model obtained after multimodal feature fusion classifies better than any single-modality model. With Laplacian matrix fusion, the average running time is reduced by about 40% compared with the extended incidence matrix, classification performance improves further, and the F1 score increases by 8.4%.
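The two steps the abstract describes — a per-modality hypergraph Laplacian, then a weighted superposition across modalities — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the standard normalized hypergraph Laplacian L = I − Dv^(−1/2) H W De^(−1) Hᵀ Dv^(−1/2) (the abstract does not specify the exact Laplacian form), and the fusion weights `alphas` and the toy incidence matrices `H1`, `H2` are made up for illustration.

```python
import numpy as np

def hypergraph_laplacian(H, w):
    """Normalized hypergraph Laplacian (assumed form):
    L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2},
    where H is the |V| x |E| incidence matrix and w the hyperedge weights."""
    H = np.asarray(H, float)
    w = np.asarray(w, float)
    d_v = H @ w                        # vertex degrees
    d_e = H.sum(axis=0)                # hyperedge degrees
    dv_is = np.diag(1.0 / np.sqrt(d_v))
    theta = dv_is @ H @ np.diag(w) @ np.diag(1.0 / d_e) @ H.T @ dv_is
    return np.eye(H.shape[0]) - theta

def fuse_laplacians(laplacians, alphas):
    """Weighted superposition of per-modality Laplacians (the abstract's
    Laplacian matrix fusion step); alphas are normalized to sum to 1."""
    alphas = np.asarray(alphas, float)
    alphas = alphas / alphas.sum()
    return sum(a * L for a, L in zip(alphas, laplacians))

# Toy example: two modalities over the same 4 vertices, 2 hyperedges each.
# One plausible reading of the "extended" incidence matrix is hyperedge-wise
# concatenation, H_ext = np.hstack([H1, H2]); the fused Laplacian is computed
# per modality instead, so that large matrix is never materialized.
H1 = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], float)
H2 = np.array([[1, 0], [0, 1], [1, 0], [0, 1]], float)
L_fused = fuse_laplacians(
    [hypergraph_laplacian(H1, [1, 1]), hypergraph_laplacian(H2, [1, 1])],
    alphas=[0.5, 0.5],
)
```

Because each per-modality Laplacian is only |V| × |V|, the fused matrix stays the same size regardless of how many modalities are added, which is consistent with the runtime savings the abstract reports over the extended incidence matrix.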


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3dcd/9126688/e4f5625fdbbb/CIN2022-9073652.001.jpg
