Wang Hao, Shi Zhanpeng, Hu Ruijie, Wang Xinyi, Chen Jian, Che Haoyuan
Public Computer Teaching and Research Center, Jilin University, Changchun, 130012, China.
College of Veterinary Medicine, Jilin University, Changchun, 130062, China.
Sci Rep. 2025 Apr 6;15(1):11797. doi: 10.1038/s41598-025-95483-z.
This study proposes a multimodal emotion recognition method that uses facial expressions, body postures, and movement trajectories to detect fear in mice. By integrating these distinct data sources through feature encoders and attention classifiers, we developed a robust emotion classification model. Evaluated against single-modal methods, the model showed significant accuracy improvements: multimodal fusion enhanced the precision of emotion detection, achieving a fear recognition accuracy of 86.7%. We also investigated how monitoring duration and frame sampling rate affect recognition accuracy. The proposed method provides an efficient and simple solution for real-time, comprehensive emotion monitoring in animal research, with potential applications in neuroscience and psychiatric studies.
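The pipeline the abstract describes (per-modality feature encoders whose outputs are fused by an attention mechanism before classification) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual architecture: the linear-ReLU encoders, the dot-product attention scoring vector, the embedding dimension, and the two-class head are all assumptions introduced here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W, b):
    # Linear + ReLU encoder; stands in for the paper's per-modality feature encoders.
    return np.maximum(W @ x + b, 0.0)

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def fuse_and_classify(face, pose, traj, params):
    # 1. Encode each modality into a shared d-dimensional embedding space.
    h = [encode(x, params[f"W_{m}"], params[f"b_{m}"])
         for m, x in (("face", face), ("pose", pose), ("traj", traj))]
    # 2. Attention over modalities: score each embedding with a learned vector
    #    (an assumed scoring scheme), softmax into weights, take weighted sum.
    scores = np.array([params["a"] @ hi for hi in h])
    weights = softmax(scores)
    fused = sum(w * hi for w, hi in zip(weights, h))
    # 3. Classifier head: fear vs. non-fear probabilities.
    probs = softmax(params["W_out"] @ fused + params["b_out"])
    return probs, weights

# Hypothetical input sizes: 16-d face features, 12-d posture, 10-d trajectory.
d = 8  # shared embedding dimension (assumption)
params = {
    **{f"W_{m}": rng.standard_normal((d, n)) * 0.1
       for m, n in (("face", 16), ("pose", 12), ("traj", 10))},
    **{f"b_{m}": np.zeros(d) for m in ("face", "pose", "traj")},
    "a": rng.standard_normal(d),              # attention scoring vector
    "W_out": rng.standard_normal((2, d)) * 0.1,
    "b_out": np.zeros(2),
}

probs, weights = fuse_and_classify(
    rng.standard_normal(16), rng.standard_normal(12), rng.standard_normal(10),
    params)
```

The attention weights expose which modality dominates each prediction, which is one practical advantage of this fusion style over simple feature concatenation.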