Department of Embedded Network Systems Technology, Faculty of Artificial Intelligence, Kafrelsheikh University, Kafr El-Sheikh, Egypt.
Department of Information Technology, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh 84428, Saudi Arabia.
Comput Intell Neurosci. 2022 Feb 2;2022:8032673. doi: 10.1155/2022/8032673. eCollection 2022.
Emotion recognition is a trending research field with several applications, the most interesting of which include robotic vision and interactive robotic communication. Human emotions can be detected from both speech and visual modalities, and facial expressions are an ideal means of detecting a person's emotions. This paper presents a real-time approach for emotion detection and its deployment in robotic vision applications. The proposed approach consists of four phases: preprocessing, key point generation, key point selection and angular encoding, and classification. The main idea is to generate key points using the MediaPipe face mesh algorithm, which is based on real-time deep learning. The generated key points are then encoded by a sequence of carefully designed mesh generator and angular encoding modules. Furthermore, feature decomposition is performed using Principal Component Analysis (PCA) to enhance the accuracy of emotion detection. Finally, the decomposed features are fed into a Machine Learning (ML) technique based on a Support Vector Machine (SVM), k-Nearest Neighbor (KNN), Naïve Bayes (NB), Logistic Regression (LR), or Random Forest (RF) classifier. In addition, we deploy a Multilayer Perceptron (MLP) as an efficient deep neural network technique. The presented techniques are evaluated on different datasets with different evaluation metrics. The simulation results reveal that they achieve a superior performance, with a human emotion detection accuracy of 97%, outperforming prior efforts in this field.
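The angular encoding step described above can be illustrated with a minimal sketch: each face-mesh key point is converted into an angle relative to a reference landmark, yielding a compact, translation-invariant feature vector. The function name, the choice of reference point, and the toy coordinates below are illustrative assumptions, not the paper's actual implementation.

```python
import math

def angular_encode(points, center):
    """Encode 2D key points as angles (radians) measured from a reference
    landmark (e.g., a nose-tip point). Hypothetical simplification of the
    angular encoding module; the paper's exact scheme may differ."""
    return [math.atan2(y - center[1], x - center[0]) for (x, y) in points]

# Toy normalized landmark coordinates around a hypothetical reference point.
center = (0.5, 0.5)
landmarks = [(0.6, 0.5), (0.5, 0.6), (0.4, 0.5), (0.5, 0.4)]

angles = angular_encode(landmarks, center)
# Each landmark is now a single angle; a full pipeline would concatenate
# such angles into a feature vector before PCA and classification.
```

In a complete pipeline, the resulting angle vectors would be decomposed with PCA and passed to one of the classifiers listed above (SVM, KNN, NB, LR, RF, or MLP).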