Modeling Emotional Valence Integration From Voice and Touch.

Authors

Tsalamlal Yacine, Amorim Michel-Ange, Martin Jean-Claude, Ammi Mehdi

Affiliations

LIMSI, CNRS, Univ. Paris-Sud, Université Paris-Saclay, Orsay, France.

CIAMS, Univ. Paris-Sud, Université Paris-Saclay, Orsay, France.

Publication

Front Psychol. 2018 Oct 12;9:1966. doi: 10.3389/fpsyg.2018.01966. eCollection 2018.

Abstract

In the context of designing multimodal social interactions for Human-Computer Interaction and Computer-Mediated Communication, we conducted an experimental study to investigate how participants combine voice expressions with tactile stimulation when evaluating emotional valence (EV). Audio and tactile stimuli were first presented separately, and then presented together. Audio stimuli comprised positive and negative voice expressions, and tactile stimuli consisted of different levels of air-jet tactile stimulation applied to the participants' arms. Participants were asked to rate the communicated EV on a continuous scale. Information Integration Theory was used to model the multimodal valence perception process. Analyses showed that participants generally integrated both sources of information to evaluate EV. The main integration rule was the averaging rule. The predominance of one modality over the other was specific to each individual.

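The averaging rule from Information Integration Theory can be sketched as a weighted average of the per-modality valence ratings. The following is a minimal illustrative sketch, not the paper's actual model: the function name and the weight values are hypothetical, chosen only to show how individual-specific modality weights would shift the integrated judgment.

```python
def averaged_valence(v_audio, v_touch, w_audio=0.5, w_touch=0.5):
    """Averaging-rule integration of audio and tactile valence ratings.

    v_audio, v_touch: valence ratings per modality (e.g. on [-1, 1]).
    w_audio, w_touch: hypothetical per-participant modality weights.
    """
    return (w_audio * v_audio + w_touch * v_touch) / (w_audio + w_touch)


# Equal weights: opposite-valence cues cancel out.
print(round(averaged_valence(1.0, -1.0), 2))

# A participant who weights voice more heavily than touch:
# a positive voice (0.8) dominates a mildly negative touch (-0.4).
print(round(averaged_valence(0.8, -0.4, w_audio=0.7, w_touch=0.3), 2))
```

Under an averaging rule the result always lies between the two unimodal ratings, which distinguishes it from an additive rule, where combining two positive cues would yield a more extreme response than either cue alone.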
Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fe41/6194168/c5064dda2ccb/fpsyg-09-01966-g001.jpg
