

What is learned about fragments in artificial grammar learning? A transitional probabilities approach.

Author information

Poletiek Fenna H, Wolters Gezinus

Affiliation

Unit of Cognitive Psychology, Leiden University, Leiden, The Netherlands.

Publication information

Q J Exp Psychol (Hove). 2009 May;62(5):868-76. doi: 10.1080/17470210802511188. Epub 2008 Dec 6.

Abstract

Learning local regularities in sequentially structured materials is typically assumed to be based on encoding of the frequencies of these regularities. We explore the view that transitional probabilities between elements of chunks, rather than frequencies of chunks, may be the primary factor in artificial grammar learning (AGL). The transitional probability model (TPM) that we propose is argued to provide an adaptive and parsimonious strategy for encoding local regularities in order to induce sequential structure from an input set of exemplars of the grammar. In a variant of the AGL procedure, in which participants estimated the frequencies of bigrams occurring in a set of exemplars they had been exposed to previously, participants were shown to be more sensitive to local transitional probability information than to mere pattern frequencies.
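The distinction the abstract draws can be made concrete: a chunk (e.g., a bigram) has a raw frequency in the input, but also a transitional probability, the conditional probability of its second element given its first. Two bigrams can occur equally often yet differ sharply in transitional probability. The sketch below (an illustration of the general idea, not the authors' TPM; the exemplar strings are hypothetical) computes both measures from a set of letter strings:

```python
from collections import Counter

def bigram_stats(exemplars):
    """Count bigram frequencies and derive transitional probabilities:
    TP(xy) = freq(xy) / freq(x as a bigram onset)."""
    bigrams = Counter()
    onsets = Counter()
    for s in exemplars:
        for a, b in zip(s, s[1:]):
            bigrams[a + b] += 1
            onsets[a] += 1
    tps = {bg: n / onsets[bg[0]] for bg, n in bigrams.items()}
    return bigrams, tps

# Hypothetical exemplars using typical AGL letters
freqs, tps = bigram_stats(["MTV", "MTTV", "VXM"])
# "MT" and "TV" occur equally often (frequency 2),
# but TP("MT") = 2/2 = 1.0 while TP("TV") = 2/3,
# because "T" also continues as "TT".
```

A frequency-based learner treats "MT" and "TV" identically; a transitional-probability learner does not, which is the contrast the reported frequency-estimation task exploits.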

