An entropy model for artificial grammar learning.

Author Information

Pothos Emmanuel M

Affiliation

Department of Psychology, Swansea University, Swansea, UK.

Publication Information

Front Psychol. 2010 Jun 17;1:16. doi: 10.3389/fpsyg.2010.00016. eCollection 2010.

Abstract

A model is proposed to characterize the type of knowledge acquired in artificial grammar learning (AGL). In particular, Shannon entropy is employed to compute the complexity of different test items in an AGL task, relative to the training items. According to this model, the more predictable a test item is from the training items, the more likely it is that this item should be selected as compatible with the training items. The predictions of the entropy model are explored in relation to the results from several previous AGL datasets and compared to other AGL measures. This particular approach in AGL resonates well with similar models in categorization and reasoning which also postulate that cognitive processing is geared towards the reduction of entropy.

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6fd7/3095384/1384d2f0911f/fpsyg-01-00016-g001.jpg
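
The abstract describes scoring each test item by how predictable it is from the training items. Below is a minimal sketch of one way such an entropy score could be computed, assuming the test string is rated by the average Shannon surprisal of its letter bigrams under bigram frequencies estimated from the training strings; the example strings, the bigram choice, and the add-one smoothing are illustrative assumptions, not the paper's exact formulation.

```python
import math
from collections import Counter

def bigrams(s):
    """Return the letter bigrams of a string, with start/end markers."""
    padded = "^" + s + "$"
    return [padded[i:i + 2] for i in range(len(padded) - 1)]

def train_bigram_stats(training_items):
    """Count bigrams over the training strings; add-one smoothing later
    reserves a small probability mass for bigrams never seen in training."""
    counts = Counter(bg for item in training_items for bg in bigrams(item))
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 slot for unseen bigrams
    return counts, total, vocab

def item_entropy(test_item, counts, total, vocab):
    """Average surprisal (-log2 p) of the test item's bigrams: lower values
    mean the item is more predictable from the training set."""
    surprisals = []
    for bg in bigrams(test_item):
        p = (counts.get(bg, 0) + 1) / (total + vocab)
        surprisals.append(-math.log2(p))
    return sum(surprisals) / len(surprisals)

# Hypothetical training and test strings in the style of an AGL experiment.
training = ["MXRVXT", "VMTRRR", "MXT", "VMTRRRM"]
counts, total, vocab = train_bigram_stats(training)
for test in ["MXRVXT", "VXRRRM", "TTTVVV"]:
    print(test, round(item_entropy(test, counts, total, vocab), 3))
```

On this reading, test items with lower scores are more predictable from the training items and, according to the model, more likely to be endorsed as compatible with the training grammar.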
