The University of Texas at Austin, Department of Speech, Language, and Hearing Sciences, USA.
J Fluency Disord. 2021 Dec;70:105846. doi: 10.1016/j.jfludis.2021.105846. Epub 2021 Mar 26.
The purpose of this study was to investigate working memory in adults who do (AWS) and do not (AWNS) stutter using a visual N-back task. Processes involved in an N-back task include encoding, storing, rehearsing, inhibition, temporal ordering, and matching.
Fifteen AWS (11 males, 4 females; M = 23.27 years, SD = 5.68 years) and 15 AWNS (M = 23.47 years, SD = 6.21 years) were asked to monitor a series of images and respond by pressing a "yes" button if the current image matched the image presented one, two, or three trials back. Stimuli included images with phonologically similar (i.e., phonological condition) or phonologically dissimilar (i.e., neutral condition) names. Accuracy and manual reaction time (mRT) were analyzed.
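The matching rule of the N-back paradigm described above can be sketched in a few lines of Python. This is an illustrative sketch only; the stimulus names and function are hypothetical and not the study's actual materials or analysis code:

```python
def nback_targets(stimuli, n):
    """Return, for each trial, whether a 'yes' response is correct:
    True when the current image name matches the one presented n trials back."""
    return [i >= n and stimuli[i] == stimuli[i - n] for i in range(len(stimuli))]

# Hypothetical 2-back sequence of picture names (not the study's stimuli)
seq = ["cat", "dog", "cat", "dog", "fish", "dog"]
print(nback_targets(seq, 2))  # [False, False, True, True, False, True]
```

As n increases, the participant must hold and continuously update more items in working memory, which is why accuracy and speed are expected to decline from 1-back to 3-back.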
No difference was found between AWS and AWNS in accuracy. Furthermore, both groups were more accurate and significantly faster on 1-back than on 2-back trials, and on 2-back than on 3-back trials. Finally, AWNS demonstrated faster mRT in the phonological compared to the neutral condition, whereas AWS did not.
Results from this study suggest different processing mechanisms between AWS and AWNS for visually presented phonologically similar stimuli. Specifically, a phonological priming effect occurred in AWNS but not in AWS, potentially due to reduced spreading activation and organization in the mental lexicon of AWS. However, the lack of differences between AWS and AWNS across all N-back levels does not support deficits in AWS in the aspects of working memory targeted by a visual N-back task, though these results are preliminary and additional research is warranted.