Jia Wenqi, Hu Yanzhi, Wang Zimeng, Song Kai, Huang Boyan
College of Electrical Engineering and Information, Northeast Agricultural University, Harbin 150030, China.
Animals (Basel). 2025 Jul 5;15(13):1984. doi: 10.3390/ani15131984.
This study introduces an innovative dog emotion classification system that integrates four non-invasive physiological indicators, namely skin potential (SP), muscle potential (MP), respiration frequency (RF), and voice pattern (VP), with the extreme gradient boosting (XGBoost) algorithm. A four-breed dataset was meticulously constructed by recording and labeling physiological signals from dogs exposed to four fundamental emotional states: happiness, sadness, fear, and anger. Comprehensive feature extraction (time-domain, frequency-domain, and nonlinear features) was conducted for each signal modality, and inter-emotional variance was analyzed to establish discriminative patterns. Four machine learning algorithms, Neural Networks (NN), Support Vector Machines (SVM), Gradient Boosting Decision Trees (GBDT), and XGBoost, were trained and evaluated, with XGBoost achieving the highest classification accuracy of 90.54%. Notably, this is the first study to fuse two complementary electrophysiological indicators, skin and muscle potentials, into a multi-modal dataset for canine emotion recognition. Further interpretability analysis using Shapley Additive exPlanations (SHAP) revealed skin potential and voice pattern features as the strongest contributors to model performance. The proposed system demonstrates high accuracy, efficiency, and portability, laying a robust groundwork for future advancements in cross-species affective computing and intelligent animal welfare technologies.
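The combination of a multi-class XGBoost model with SHAP feature attribution described above can be illustrated with a minimal sketch. This is not the authors' implementation: the feature matrix, labels, and hyperparameters below are placeholder assumptions standing in for the extracted SP, MP, RF, and VP features, and only the general pattern of training and attribution is shown.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import shap

# Hypothetical feature matrix: rows = recording segments, columns = features
# extracted from the four modalities (SP, MP, RF, VP); labels = 4 emotions.
rng = np.random.default_rng(0)
n_samples, n_features = 400, 24
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 4, size=n_samples)  # 0=happy, 1=sad, 2=fear, 3=anger

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Multi-class XGBoost classifier; hyperparameter values are placeholders,
# not those reported in the paper.
model = XGBClassifier(
    n_estimators=200,
    max_depth=4,
    learning_rate=0.1,
    eval_metric="mlogloss",
)
model.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))

# SHAP interpretability: TreeExplainer attributes predictions to individual
# features, so contributions can be aggregated and ranked per modality.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
# Multi-class output may be a list (one array per class) or a single array
# with classes on the last axis, depending on the shap version; normalize it.
if not isinstance(shap_values, list):
    shap_values = [shap_values[..., k] for k in range(shap_values.shape[-1])]
mean_abs = np.mean([np.abs(sv).mean(axis=0) for sv in shap_values], axis=0)
top = np.argsort(mean_abs)[::-1][:5]
print("top feature indices by mean |SHAP|:", top)
```

In such a setup, grouping the per-feature mean |SHAP| values by their source modality would indicate which signals (e.g., skin potential vs. voice pattern) drive the model, which is the kind of analysis the abstract reports.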