Liu Fen, Chen Jianfeng, Tan Weijie, Cai Chang
School of Marine Science and Technology, Northwestern Polytechnical University, Xi'an 710072, China.
College of Mathematics and Computer Science, Yan'an University, Yan'an 716000, China.
Entropy (Basel). 2021 Oct 15;23(10):1349. doi: 10.3390/e23101349.
Multi-modal fusion can achieve better predictions by amalgamating information from different modalities. To improve accuracy, a method based on Higher-order Orthogonal Iteration Decomposition and Projection (HOIDP) is proposed. In the fusion process, a higher-order orthogonal iteration decomposition algorithm and factor matrix projection are used to remove redundant information duplicated across modalities and to produce fewer parameters with minimal information loss. The performance of the proposed method is verified on three different multi-modal datasets. The numerical results show that, compared with five other methods, the proposed method improves accuracy by 0.4% to 4% in sentiment analysis, 0.3% to 8% in personality trait recognition, and 0.2% to 25% in emotion recognition across the three datasets.
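The higher-order orthogonal iteration (HOOI) step the abstract refers to is a standard algorithm for computing a truncated Tucker decomposition: each mode's factor matrix is alternately refit as the leading left singular vectors of the tensor projected onto the other modes' factors. The following is a minimal NumPy sketch of generic HOOI, not the paper's HOIDP implementation; the tensor shapes, ranks, and iteration count are illustrative assumptions.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    """Mode-n product: multiply the mode-n unfolding by M, then refold."""
    rest = [s for i, s in enumerate(T.shape) if i != mode]
    out = (M @ unfold(T, mode)).reshape([M.shape[0]] + rest)
    return np.moveaxis(out, 0, mode)

def hooi(X, ranks, n_iter=20):
    """Truncated Tucker decomposition via higher-order orthogonal iteration."""
    # Initialize factors with HOSVD: leading singular vectors of each unfolding.
    factors = [np.linalg.svd(unfold(X, n), full_matrices=False)[0][:, :r]
               for n, r in enumerate(ranks)]
    for _ in range(n_iter):
        for n in range(X.ndim):
            # Project X onto all factor subspaces except mode n ...
            Y = X
            for m in range(X.ndim):
                if m != n:
                    Y = mode_dot(Y, factors[m].T, m)
            # ... then refit mode n's factor from the projected tensor.
            factors[n] = np.linalg.svd(unfold(Y, n),
                                       full_matrices=False)[0][:, :ranks[n]]
    # Core tensor: X compressed by all factor matrices.
    core = X
    for m in range(X.ndim):
        core = mode_dot(core, factors[m].T, m)
    return core, factors
```

In a fusion setting, the orthonormal factor matrices returned here would serve as the per-mode projections that compress each modality with few parameters; reconstructing via `core` multiplied back through the factors recovers the tensor up to the chosen ranks.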