

Emotion detection from handwriting and drawing samples using an attention-based transformer model.

Authors

Khan Zohaib Ahmad, Xia Yuanqing, Aurangzeb Khursheed, Khaliq Fiza, Alam Mahmood, Khan Javed Ali, Anwar Muhammad Shahid

Affiliations

School of Automation, Beijing Institute of Technology, Beijing, China.

Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia.

Publication

PeerJ Comput Sci. 2024 Mar 29;10:e1887. doi: 10.7717/peerj-cs.1887. eCollection 2024.

DOI: 10.7717/peerj-cs.1887
PMID: 38660197
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11041987/
Abstract

Emotion detection (ED) involves the identification and understanding of an individual's emotional state through various cues such as facial expressions, voice tones, physiological changes, and behavioral patterns. In this context, behavioral analysis is employed to observe actions and behaviors for emotional interpretation. This work specifically employs behavioral metrics like drawing and handwriting to determine a person's emotional state, recognizing these actions as physical functions integrating motor and cognitive processes. The study proposes an attention-based transformer model as an innovative approach to identify emotions from handwriting and drawing samples, thereby advancing the capabilities of ED into the domains of fine motor skills and artistic expression. The initial data obtained provides a set of points that correspond to the handwriting or drawing strokes. Each stroke point is subsequently delivered to the attention-based transformer model, which embeds it into a high-dimensional vector space. The model builds a prediction about the emotional state of the person who generated the sample by integrating the most important components and patterns in the input sequence using self-attentional processes. The proposed approach possesses a distinct advantage in its enhanced capacity to capture long-range correlations compared to conventional recurrent neural networks (RNN). This characteristic makes it particularly well-suited for the precise identification of emotions from samples of handwriting and drawings, signifying a notable advancement in the field of emotion detection. The proposed method produced cutting-edge outcomes of 92.64% on the benchmark dataset known as EMOTHAW (Emotion Recognition Handwriting and Drawing).
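The pipeline the abstract describes — embedding each stroke point into a high-dimensional vector space, applying self-attention over the point sequence, and predicting an emotional state — can be sketched roughly as below. This is a minimal illustrative sketch, not the authors' exact architecture: the 4-feature point encoding (x, y, pressure, timestamp), the model dimensions, the mean-pooling classifier head, and the class count are all assumptions for demonstration.

```python
import torch
import torch.nn as nn

class StrokeEmotionTransformer(nn.Module):
    """Illustrative sketch: classify emotion from a sequence of pen-stroke points."""

    def __init__(self, point_dim=4, d_model=64, nhead=4, num_layers=2, num_classes=3):
        super().__init__()
        # Embed each raw stroke point into a d_model-dimensional vector
        self.embed = nn.Linear(point_dim, d_model)
        # Self-attention layers integrate patterns across the whole stroke sequence,
        # capturing long-range correlations that RNNs struggle with
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        # Pooled sequence representation -> emotion-class logits
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, points):
        # points: (batch, seq_len, point_dim), e.g. (x, y, pressure, timestamp)
        h = self.encoder(self.embed(points))
        return self.head(h.mean(dim=1))  # mean-pool over the sequence

model = StrokeEmotionTransformer()
x = torch.randn(2, 50, 4)  # 2 samples, 50 stroke points each, 4 features per point
logits = model(x)          # shape: (2, 3) — one score per hypothetical emotion class
```

In practice a real implementation would add positional information and a task-specific head; the sketch only shows how per-point embeddings and self-attention compose into a sequence-level emotion prediction.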


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/139d/11041987/2e1f4c17ff9f/peerj-cs-10-1887-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/139d/11041987/8e3c4b0929e0/peerj-cs-10-1887-g002.jpg

Similar Articles

1. Emotion detection from handwriting and drawing samples using an attention-based transformer model.
   PeerJ Comput Sci. 2024 Mar 29;10:e1887. doi: 10.7717/peerj-cs.1887. eCollection 2024.
2. Objectively Quantifying Pediatric Psychiatric Severity Using Artificial Intelligence, Voice Recognition Technology, and Universal Emotions: Pilot Study for Artificial Intelligence-Enabled Innovation to Address Youth Mental Health Crisis.
   JMIR Res Protoc. 2023 Oct 23;12:e51912. doi: 10.2196/51912.
3. Influence of age and movement complexity on kinematic hand movement parameters in childhood and adolescence.
   Int J Dev Neurosci. 2008 Nov;26(7):655-63. doi: 10.1016/j.ijdevneu.2008.07.015. Epub 2008 Aug 5.
4. BAT: Block and token self-attention for speech emotion recognition.
   Neural Netw. 2022 Dec;156:67-80. doi: 10.1016/j.neunet.2022.09.022. Epub 2022 Sep 29.
5. Designing Visual-Arts Education Programs for Transfer Effects: Development and Experimental Evaluation of (Digital) Drawing Courses in the Art Museum Designed to Promote Adolescents' Socio-Emotional Skills.
   Front Psychol. 2021 Jan 18;11:603984. doi: 10.3389/fpsyg.2020.603984. eCollection 2020.
6. A novel transformer autoencoder for multi-modal emotion recognition with incomplete data.
   Neural Netw. 2024 Apr;172:106111. doi: 10.1016/j.neunet.2024.106111. Epub 2024 Jan 6.
7. The role of linguistic and cognitive factors in emotion recognition difficulties in children with ASD, ADHD or DLD.
   Int J Lang Commun Disord. 2020 Mar;55(2):231-242. doi: 10.1111/1460-6984.12514. Epub 2019 Dec 3.
8. Emotion Recognition from Large-Scale Video Clips with Cross-Attention and Hybrid Feature Weighting Neural Networks.
   Int J Environ Res Public Health. 2023 Jan 12;20(2):1400. doi: 10.3390/ijerph20021400.
9. ERTNet: an interpretable transformer-based framework for EEG emotion recognition.
   Front Neurosci. 2024 Jan 17;18:1320645. doi: 10.3389/fnins.2024.1320645. eCollection 2024.
10. Speech Emotion Recognition Using Convolution Neural Networks and Multi-Head Convolutional Transformer.
   Sensors (Basel). 2023 Jul 7;23(13):6212. doi: 10.3390/s23136212.

Cited By

1. Mining software insights: uncovering the frequently occurring issues in low-rating software applications.
   PeerJ Comput Sci. 2024 Jul 10;10:e2115. doi: 10.7717/peerj-cs.2115. eCollection 2024.

References

1. Emotion recognition of social media users based on deep learning.
   PeerJ Comput Sci. 2023 Jun 14;9:e1414. doi: 10.7717/peerj-cs.1414. eCollection 2023.
2. Classification of mild cognitive impairment based on handwriting dynamics and qEEG.
   Comput Biol Med. 2023 Jan;152:106418. doi: 10.1016/j.compbiomed.2022.106418. Epub 2022 Dec 12.
3. Feature selection enhancement and feature space visualization for speech-based emotion recognition.
   PeerJ Comput Sci. 2022 Nov 4;8:e1091. doi: 10.7717/peerj-cs.1091. eCollection 2022.
4. An Urdu speech corpus for emotion recognition.
   PeerJ Comput Sci. 2022 May 9;8:e954. doi: 10.7717/peerj-cs.954. eCollection 2022.
5. Automatic emotion recognition in healthcare data using supervised machine learning.
   PeerJ Comput Sci. 2021 Dec 15;7:e751. doi: 10.7717/peerj-cs.751. eCollection 2021.
6. COVID-19 classification by CCSHNet with deep fusion using transfer learning and discriminant correlation analysis.
   Inf Fusion. 2021 Apr;68:131-148. doi: 10.1016/j.inffus.2020.11.005. Epub 2020 Nov 13.
7. Advances in multimodal data fusion in neuroimaging: Overview, challenges, and novel orientation.
   Inf Fusion. 2020 Dec;64:149-187. doi: 10.1016/j.inffus.2020.07.006. Epub 2020 Jul 17.
8. A novel approach combining temporal and spectral features of Arabic online handwriting for Parkinson's disease prediction.
   J Neurosci Methods. 2020 Jun 1;339:108727. doi: 10.1016/j.jneumeth.2020.108727. Epub 2020 Apr 13.