Do Humans Use Push-Down Stacks When Learning or Producing Center-Embedded Sequences?

Author Information

Ferrigno Stephen, Cheyette Samuel J, Carey Susan

Affiliations

Department of Psychology, University of Wisconsin-Madison.

Department of Psychology, Harvard University.

Publication Information

Cogn Sci. 2025 Sep;49(9):e70112. doi: 10.1111/cogs.70112.

Abstract

Complex sequences are ubiquitous in human mental life, structuring representations within many different cognitive domains: natural language, music, mathematics, and logic, to name a few. However, the representational and computational machinery used to learn abstract grammars and process complex sequences is unknown. Here, we used an artificial grammar learning task to study how adults abstract center-embedded and cross-serial grammars that generalize beyond the level of embedding of the training sequences. We tested untrained generalizations to longer sequence lengths and used error patterns, item-to-item response times, and a Bayesian mixture model to test two possible memory architectures that might underlie the sequence representations of each grammar: stacks and queues. We find that adults learned both grammars, that the cross-serial grammar was easier to learn and produce than the matched center-embedded grammar, and that item-to-item touch times during sequence generation differed systematically between the two types of sequences. Contrary to widely held assumptions, we find no evidence that a stack architecture is used to generate center-embedded sequences in an indexed AB artificial grammar. Instead, the data and modeling converged on the conclusion that both center-embedded and cross-serial sequences are generated using a queue memory architecture. In this study, participants stored items in a first-in-first-out memory architecture and then accessed them via an iterative search over the stored list to generate the matched base pairs of center-embedded or cross-serial sequences.
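As a minimal illustration of the two memory architectures contrasted in the abstract (not part of the study itself), the Python sketch below shows why a last-in-first-out stack naturally yields center-embedded orderings (A1 A2 A3 B3 B2 B1) while a first-in-first-out queue yields cross-serial orderings (A1 A2 A3 B1 B2 B3). The indexed A/B items and function names are hypothetical and chosen only for illustration.

```python
from collections import deque

def center_embedded(pairs):
    """Produce a center-embedded sequence (A1 A2 A3 B3 B2 B1):
    pushing each matching B item onto a stack and popping it later
    retrieves the B items in reverse (last-in-first-out) order."""
    stack, out = [], []
    for a, b in pairs:
        out.append(a)
        stack.append(b)          # remember the matching B item
    while stack:
        out.append(stack.pop())  # LIFO retrieval reverses the B order
    return out

def cross_serial(pairs):
    """Produce a cross-serial sequence (A1 A2 A3 B1 B2 B3):
    storing the B items in a queue and dequeuing them preserves
    their original (first-in-first-out) order."""
    queue, out = deque(), []
    for a, b in pairs:
        out.append(a)
        queue.append(b)
    while queue:
        out.append(queue.popleft())  # FIFO retrieval keeps the B order
    return out

pairs = [("A1", "B1"), ("A2", "B2"), ("A3", "B3")]
print(center_embedded(pairs))  # ['A1', 'A2', 'A3', 'B3', 'B2', 'B1']
print(cross_serial(pairs))     # ['A1', 'A2', 'A3', 'B1', 'B2', 'B3']
```

Note that the study's data point away from the stack account for human participants: according to the abstract, both sequence types appeared to be generated from a first-in-first-out store accessed by iterative search over the stored list, rather than center-embedded sequences being produced by a stack.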

