Sequence effects and speech processing: cognitive load for speaker-switching within and across accents.
Author information
Department of Psychological and Brain Sciences, Washington University in St. Louis, St Louis, MO, USA.
Basque Center on Cognition, Brain and Language, Paseo Mikeletegi, 69, 20009, Donostia-San Sebastián, Gipuzkoa, Spain.
Publication information
Psychon Bull Rev. 2024 Feb;31(1):176-186. doi: 10.3758/s13423-023-02322-1. Epub 2023 Jul 13.
Prior work in speech processing indicates that listening tasks with multiple speakers (as opposed to a single speaker) result in slower and less accurate processing. Notably, the trial-to-trial cognitive demands of switching between speakers or switching between accents have yet to be examined. We used pupillometry, a physiological index of cognitive load, to examine the demands of processing first (L1) and second (L2) language-accented speech when listening to sentences produced by the same speaker consecutively (no switch), a novel speaker of the same accent (within-accent switch), and a novel speaker with a different accent (across-accent switch). Inspired by research on sequential adjustments in cognitive control, we aimed to identify the cognitive demands of accommodating a novel speaker and accent by examining the trial-to-trial changes in pupil dilation during speech processing. Our results indicate that switching between speakers was more cognitively demanding than listening to the same speaker consecutively. Additionally, switching to a novel speaker with a different accent was more cognitively demanding than switching between speakers of the same accent. However, there was an asymmetry for across-accent switches, such that switching from an L1 to an L2 accent was more demanding than vice versa. Findings from the present study align with work examining multi-talker processing costs, and provide novel evidence that listeners dynamically adjust cognitive processing to accommodate speaker and accent variability. We discuss these novel findings in the context of an active control model and auditory streaming framework of speech processing.
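To make the trial-to-trial design concrete, the following is a minimal illustrative sketch (not the authors' analysis code): it labels each trial as a no-switch, within-accent switch, or across-accent switch based on the preceding trial's speaker and accent, and then averages a pupil-dilation value per condition. The speaker IDs, accent labels, and dilation numbers are invented for illustration only.

```python
# Hypothetical sketch of the trial-coding logic described in the abstract.
# Speaker IDs, accent labels, and dilation values are invented placeholders.
from collections import defaultdict

def label_switch(prev, curr):
    """Classify a trial relative to the preceding one.

    prev, curr: (speaker_id, accent) tuples, e.g. ("s01", "L1").
    Returns "no_switch", "within_accent_switch", or "across_accent_switch".
    """
    prev_speaker, prev_accent = prev
    curr_speaker, curr_accent = curr
    if curr_speaker == prev_speaker:
        return "no_switch"
    if curr_accent == prev_accent:
        return "within_accent_switch"
    return "across_accent_switch"

# Toy trial sequence: (speaker_id, accent, mean pupil dilation in arbitrary units).
trials = [
    ("s01", "L1", 0.12),
    ("s01", "L1", 0.10),   # same speaker consecutively -> no_switch
    ("s02", "L1", 0.15),   # novel speaker, same accent -> within_accent_switch
    ("s03", "L2", 0.22),   # novel speaker, different accent -> across_accent_switch
    ("s03", "L2", 0.11),
    ("s04", "L1", 0.18),   # L2 -> L1 across-accent switch
]

dilation_by_condition = defaultdict(list)
for prev, curr in zip(trials, trials[1:]):
    condition = label_switch(prev[:2], curr[:2])
    dilation_by_condition[condition].append(curr[2])

for condition, values in dilation_by_condition.items():
    print(f"{condition}: mean dilation = {sum(values) / len(values):.3f}")
```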