

Sounds like a fight: listeners can infer behavioural contexts from spontaneous nonverbal vocalisations.

Affiliations

Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands.

Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands.

Publication Information

Cogn Emot. 2024 May;38(3):277-295. doi: 10.1080/02699931.2023.2285854. Epub 2023 Nov 24.

DOI: 10.1080/02699931.2023.2285854
PMID: 37997898
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11057848/
Abstract

When we hear another person laugh or scream, can we tell the kind of situation they are in - for example, whether they are playing or fighting? Nonverbal expressions are theorised to vary systematically across behavioural contexts. Perceivers might be sensitive to these putative systematic mappings and thereby correctly infer contexts from others' vocalisations. Here, in two pre-registered experiments, we test the prediction that listeners can accurately deduce production contexts (e.g. being tickled, discovering threat) from spontaneous nonverbal vocalisations, like sighs and grunts. In Experiment 1, listeners (total N = 3120) matched 200 nonverbal vocalisations to one of 10 contexts using yes/no response options. Using signal detection analysis, we show that listeners were accurate at matching vocalisations to nine of the contexts. In Experiment 2, listeners (N = 337) categorised the production contexts by selecting from 10 response options in a forced-choice task. By analysing unbiased hit rates, we show that participants categorised all 10 contexts at better-than-chance levels. Together, these results demonstrate that perceivers can infer contexts from nonverbal vocalisations at rates that exceed that of random selection, suggesting that listeners are sensitive to systematic mappings between acoustic structures in vocalisations and behavioural contexts.


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a115/11057848/21dfba2f1aab/PCEM_A_2285854_F0001_OC.jpg

Similar Articles

1. Sounds like a fight: listeners can infer behavioural contexts from spontaneous nonverbal vocalisations.
   Cogn Emot. 2024 May;38(3):277-295. doi: 10.1080/02699931.2023.2285854. Epub 2023 Nov 24.
2. Typical vs. atypical: Combining auditory Gestalt perception and acoustic analysis of early vocalisations in Rett syndrome.
   Res Dev Disabil. 2018 Nov;82:109-119. doi: 10.1016/j.ridd.2018.02.019. Epub 2018 Mar 15.
3. Human roars communicate upper-body strength more effectively than do screams or aggressive and distressed speech.
   PLoS One. 2019 Mar 4;14(3):e0213034. doi: 10.1371/journal.pone.0213034. eCollection 2019.
4. Human listeners' perception of behavioural context and core affect dimensions in chimpanzee vocalizations.
   Proc Biol Sci. 2020 Jun 24;287(1929):20201148. doi: 10.1098/rspb.2020.1148. Epub 2020 Jun 17.
5. Can perceivers recognise emotions from spontaneous expressions?
   Cogn Emot. 2018 May;32(3):504-515. doi: 10.1080/02699931.2017.1320978. Epub 2017 Apr 27.
6. Listeners can extract meaning from non-linguistic infant vocalisations cross-culturally.
   Sci Rep. 2017 Jan 25;7:41016. doi: 10.1038/srep41016.
7. The credibility of acted screams: Implications for emotional communication research.
   Q J Exp Psychol (Hove). 2019 Aug;72(8):1889-1902. doi: 10.1177/1747021818816307. Epub 2018 Dec 4.
8. The Perception of Spontaneous and Volitional Laughter Across 21 Societies.
   Psychol Sci. 2018 Sep;29(9):1515-1525. doi: 10.1177/0956797618778235. Epub 2018 Jul 25.
9. Human emotional vocalizations can develop in the absence of auditory learning.
   Emotion. 2020 Dec;20(8):1435-1445. doi: 10.1037/emo0000654. Epub 2019 Sep 2.
10. Analyzing nonverbal listener responses using parallel recordings of multiple listeners.
    Cogn Process. 2012 Oct;13 Suppl 2(Suppl 2):499-506. doi: 10.1007/s10339-012-0434-3. Epub 2012 Feb 19.

Cited By

1. Tickling induces a unique type of spontaneous laughter.
   Biol Lett. 2024 Nov;20(11):20240543. doi: 10.1098/rsbl.2024.0543. Epub 2024 Nov 20.

References

1. Do nonlinear vocal phenomena signal negative valence or high emotion intensity?
   R Soc Open Sci. 2020 Dec 2;7(12):201306. doi: 10.1098/rsos.201306. eCollection 2020 Dec.
2. Human listeners' perception of behavioural context and core affect dimensions in chimpanzee vocalizations.
   Proc Biol Sci. 2020 Jun 24;287(1929):20201148. doi: 10.1098/rspb.2020.1148. Epub 2020 Jun 17.
3. Raincloud plots: a multi-platform tool for robust data visualization.
   Wellcome Open Res. 2021 Jan 21;4:63. doi: 10.12688/wellcomeopenres.15191.2. eCollection 2019.
4. Loud and unclear: Intense real-life vocalizations during affective situations are perceptually ambiguous and contextually malleable.
   J Exp Psychol Gen. 2019 Oct;148(10):1842-1848. doi: 10.1037/xge0000535. Epub 2018 Dec 27.
5. Mapping 24 emotions conveyed by brief human vocalization.
   Am Psychol. 2019 Sep;74(6):698-712. doi: 10.1037/amp0000399. Epub 2018 Dec 20.
6. Form and Function in Human Song.
   Curr Biol. 2018 Feb 5;28(3):356-368.e5. doi: 10.1016/j.cub.2017.12.042. Epub 2018 Jan 25.
7. Human Novelty Response to Emotional Animal Vocalizations: Effects of Phylogeny and Familiarity.
   Front Behav Neurosci. 2017 Oct 24;11:204. doi: 10.3389/fnbeh.2017.00204. eCollection 2017.
8. Towards a social functional account of laughter: Acoustic features convey reward, affiliation, and dominance.
   PLoS One. 2017 Aug 29;12(8):e0183811. doi: 10.1371/journal.pone.0183811. eCollection 2017.
9. Humans recognize emotional arousal in vocalizations across all classes of terrestrial vertebrates: evidence for acoustic universals.
   Proc Biol Sci. 2017 Jul 26;284(1859). doi: 10.1098/rspb.2017.0990.
10. Can perceivers recognise emotions from spontaneous expressions?
    Cogn Emot. 2018 May;32(3):504-515. doi: 10.1080/02699931.2017.1320978. Epub 2017 Apr 27.