Zhang Jinlei, Qiu Xue, Li Xiang, Huang Zhijie, Wu Mingqiu, Dong Yumin
College of Computer and Information Science, Chongqing Normal University, Chongqing, China.
Comput Intell Neurosci. 2021 Apr 13;2021:6653659. doi: 10.1155/2021/6653659. eCollection 2021.
Emotion recognition is a research hotspot in the field of artificial intelligence. If a human-computer interaction system can sense and express human emotion, the interaction between robots and humans becomes more natural. In this paper, a multimodal emotion recognition model based on a many-objective optimization algorithm is proposed for the first time. The model integrates voice information and facial information and can simultaneously optimize the accuracy and uniformity of recognition. The emotion recognition algorithm optimized by the many-objective algorithm is compared with the single-modal emotion recognition models proposed in this paper and with the ISMS_ALA model from recent related research. The experimental results show that, compared with single-modal emotion recognition, the proposed model achieves a substantial improvement in every evaluation index, and its emotion recognition accuracy is 2.88% higher than that of the ISMS_ALA model. These results show that the many-objective optimization algorithm can effectively improve the performance of the multimodal emotion recognition model.
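To make the abstract's setup concrete, the following is a minimal Python sketch of late fusion of voice and facial emotion scores together with the two criteria the abstract names, accuracy and uniformity of recognition. It is not the authors' implementation: the functions fuse_scores and objectives, the single fusion weight, and the choice of "uniformity = 1 - standard deviation of per-class accuracies" are all illustrative assumptions, and a simple Pareto filter over candidate weights stands in for the actual many-objective optimizer.

```python
# Hypothetical sketch (not the paper's code): weighted late fusion of
# per-modality emotion scores, evaluated on two objectives a many-objective
# optimizer could trade off (accuracy and uniformity across emotion classes).
import numpy as np

def fuse_scores(voice_scores: np.ndarray, face_scores: np.ndarray, w: float) -> np.ndarray:
    """Weighted late fusion of voice and facial class-score matrices
    (shape: n_samples x n_emotions); w in [0, 1] weights the voice modality."""
    return w * voice_scores + (1.0 - w) * face_scores

def objectives(fused_scores: np.ndarray, labels: np.ndarray) -> tuple[float, float]:
    """Return (accuracy, uniformity) for one fusion candidate. Uniformity is
    defined here (as an assumption) as 1 minus the spread of per-class accuracies."""
    preds = fused_scores.argmax(axis=1)
    accuracy = float((preds == labels).mean())
    per_class_acc = [(preds[labels == c] == c).mean() for c in np.unique(labels)]
    uniformity = 1.0 - float(np.std(per_class_acc))  # higher = more even across emotions
    return accuracy, uniformity

# Toy usage with random scores for 6 emotion classes: sweep the fusion weight
# and keep the non-dominated (Pareto) candidates as a stand-in for the real
# many-objective search over model parameters.
rng = np.random.default_rng(0)
labels = rng.integers(0, 6, size=200)
voice = rng.dirichlet(np.ones(6), size=200)
face = rng.dirichlet(np.ones(6), size=200)
candidates = [(w, *objectives(fuse_scores(voice, face, w), labels))
              for w in np.linspace(0.0, 1.0, 11)]
pareto = [c for c in candidates
          if not any(o[1] >= c[1] and o[2] >= c[2] and o != c for o in candidates)]
print(pareto)
```

In this toy form the problem has only two objectives; the paper's many-objective setting generalizes the same idea to more than three simultaneous criteria.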