
A Multi-Modal Fusion Method Based on Higher-Order Orthogonal Iteration Decomposition

Authors

Liu Fen, Chen Jianfeng, Tan Weijie, Cai Chang

Affiliations

School of Marine Science and Technology, Northwestern Polytechnical University, Xi'an 710072, China.

College of Mathematics and Computer Science, Yan'an University, Yan'an 716000, China.

Publication

Entropy (Basel). 2021 Oct 15;23(10):1349. doi: 10.3390/e23101349.

DOI: 10.3390/e23101349
PMID: 34682073
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8534596/
Abstract

Multi-modal fusion can achieve better predictions by amalgamating information from different modalities. To improve accuracy, a method based on Higher-order Orthogonal Iteration Decomposition and Projection (HOIDP) is proposed. In the fusion process, the higher-order orthogonal iteration decomposition algorithm and factor-matrix projection remove redundant information duplicated across modalities, producing fewer parameters with minimal information loss. The performance of the proposed method is verified on three different multi-modal datasets. Compared with five other methods, the numerical results show accuracy improvements of 0.4% to 4% in sentiment analysis, 0.3% to 8% in personality trait recognition, and 0.2% to 25% in emotion recognition across the three datasets.
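The decomposition at the heart of the abstract — higher-order orthogonal iteration (HOOI), which computes a Tucker decomposition by alternately refreshing per-mode orthogonal factor matrices — can be sketched in plain NumPy. This is a minimal illustration of standard HOOI on a single tensor, not the paper's full HOIDP fusion pipeline; the function names, ranks, and iteration count are assumptions for the example.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    """Multiply tensor T by matrix M along `mode` (contracts M's columns)."""
    out = np.tensordot(M, T, axes=(1, mode))
    return np.moveaxis(out, 0, mode)

def hooi(T, ranks, n_iter=25):
    """Higher-Order Orthogonal Iteration: Tucker decomposition of T."""
    # Initialize factors with truncated HOSVD (leading left singular vectors
    # of each mode unfolding).
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    for _ in range(n_iter):
        for m in range(T.ndim):
            # Project T onto every factor except mode m ...
            Y = T
            for k in range(T.ndim):
                if k != m:
                    Y = mode_product(Y, factors[k].T, k)
            # ... then refresh factor m from the dominant subspace.
            factors[m] = np.linalg.svd(unfold(Y, m),
                                       full_matrices=False)[0][:, :ranks[m]]
    # Core tensor: project T onto all factor matrices.
    core = T
    for k in range(T.ndim):
        core = mode_product(core, factors[k].T, k)
    return core, factors

def reconstruct(core, factors):
    """Expand the core tensor back to the full-size tensor."""
    T = core
    for k, U in enumerate(factors):
        T = mode_product(T, U, k)
    return T
```

The compressed core tensor (shape given by `ranks`) is what carries far fewer parameters than the original tensor, which is the source of the parameter reduction the abstract describes; for an exactly low-multilinear-rank tensor, HOOI recovers it with negligible loss.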


Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1ca2/8534596/a8a08fc99389/entropy-23-01349-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1ca2/8534596/355f5dedbf45/entropy-23-01349-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1ca2/8534596/f41d715d91c4/entropy-23-01349-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1ca2/8534596/3ffcd4261fd2/entropy-23-01349-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1ca2/8534596/12d7c7af83b7/entropy-23-01349-g005.jpg

Similar Articles

1. A Multi-Modal Fusion Method Based on Higher-Order Orthogonal Iteration Decomposition.
Entropy (Basel). 2021 Oct 15;23(10):1349. doi: 10.3390/e23101349.
2. A Parallel Multi-Modal Factorized Bilinear Pooling Fusion Method Based on the Semi-Tensor Product for Emotion Recognition.
Entropy (Basel). 2022 Dec 16;24(12):1836. doi: 10.3390/e24121836.
3. Multi-Modal Image Fusion Based on Matrix Product State of Tensor.
Front Neurorobot. 2021 Nov 15;15:762252. doi: 10.3389/fnbot.2021.762252. eCollection 2021.
4. Multi-Modal Fusion Emotion Recognition Method of Speech Expression Based on Deep Learning.
Front Neurorobot. 2021 Jul 9;15:697634. doi: 10.3389/fnbot.2021.697634. eCollection 2021.
5. Cross-Modal Sentiment Sensing with Visual-Augmented Representation and Diverse Decision Fusion.
Sensors (Basel). 2021 Dec 23;22(1):74. doi: 10.3390/s22010074.
6. Multi-Modal Medical Image Fusion With Geometric Algebra Based Sparse Representation.
Front Genet. 2022 Jun 23;13:927222. doi: 10.3389/fgene.2022.927222. eCollection 2022.
7. CMBF: Cross-Modal-Based Fusion Recommendation Algorithm.
Sensors (Basel). 2021 Aug 4;21(16):5275. doi: 10.3390/s21165275.
8. Hybrid EEG-fNIRS BCI Fusion Using Multi-Resolution Singular Value Decomposition (MSVD).
Front Hum Neurosci. 2020 Dec 8;14:599802. doi: 10.3389/fnhum.2020.599802. eCollection 2020.
9. Application of the Recognition Algorithm of News Sentiment Dissemination Tendency Based on Multi-Mode Information Fusion.
Front Psychol. 2022 May 31;13:853899. doi: 10.3389/fpsyg.2022.853899. eCollection 2022.
10. Decomposition-Based Correlation Learning for Multi-Modal MRI-Based Classification of Neuropsychiatric Disorders.
Front Neurosci. 2022 May 25;16:832276. doi: 10.3389/fnins.2022.832276. eCollection 2022.

Cited By

1. A Survey of Deep Learning-Based Multimodal Emotion Recognition: Speech, Text, and Face.
Entropy (Basel). 2023 Oct 12;25(10):1440. doi: 10.3390/e25101440.
2. A Parallel Multi-Modal Factorized Bilinear Pooling Fusion Method Based on the Semi-Tensor Product for Emotion Recognition.
Entropy (Basel). 2022 Dec 16;24(12):1836. doi: 10.3390/e24121836.

References

1. Multi-attention Recurrent Network for Human Communication Comprehension.
Proc AAAI Conf Artif Intell. 2018 Feb;2018:5642-5649.
2. Multimodal Machine Learning: A Survey and Taxonomy.
IEEE Trans Pattern Anal Mach Intell. 2019 Feb;41(2):423-443. doi: 10.1109/TPAMI.2018.2798607. Epub 2018 Jan 25.
3. Video2vec Embeddings Recognize Events When Examples Are Scarce.
IEEE Trans Pattern Anal Mach Intell. 2017 Oct;39(10):2089-2103. doi: 10.1109/TPAMI.2016.2627563. Epub 2016 Nov 10.
4. Long short-term memory.
Neural Comput. 1997 Nov 15;9(8):1735-80. doi: 10.1162/neco.1997.9.8.1735.