
The maintenance of cross-domain associations in the episodic buffer.

Author Information

Langerock Naomi, Vergauwe Evie, Barrouillet Pierre

Affiliations

Faculté de Psychologie et de Sciences de l'Education, Université de Genève.

Department of Psychology, University of Missouri.

Publication Information

J Exp Psychol Learn Mem Cogn. 2014 Jul;40(4):1096-109. doi: 10.1037/a0035783. Epub 2014 Feb 17.

Abstract

The episodic buffer has been described as a structure of working memory capable of maintaining multimodal information in an integrated format. Although the role of the episodic buffer in binding features into objects has received considerable attention, several of its characteristics have remained rather underexplored. This is the case for its maintenance capacity limits and its separability from domain-specific maintenance buffers. The present study addressed these questions, making use of a complex span paradigm in which participants were asked to maintain cross-domain (i.e., verbal-spatial) associations. The 1st experiment showed that the capacity limit for these cross-domain associations proved to be lower than the capacity limit for single features, and did not exceed 3. Cross-domain associations and single features depended, however, to the same extent on attentional resources for their maintenance. The 2nd experiment showed that domain-specific (verbal or spatial) resources were not involved in the maintenance of cross-domain information, revealing a clear distinction between the episodic buffer and the domain-specific buffers. Overall, in line with the episodic buffer hypothesis, these findings support the existence of a central system of limited capacity for the maintenance of cross-domain information.

