Pothos Emmanuel M
Department of Psychology, Swansea University, Swansea, UK.
Front Psychol. 2010 Jun 17;1:16. doi: 10.3389/fpsyg.2010.00016. eCollection 2010.
A model is proposed to characterize the type of knowledge acquired in artificial grammar learning (AGL). In particular, Shannon entropy is employed to compute the complexity of different test items in an AGL task, relative to the training items. According to this model, the more predictable a test item is from the training items, the more likely it is that this item should be selected as compatible with the training items. The predictions of the entropy model are explored in relation to the results from several previous AGL datasets and compared to other AGL measures. This particular approach in AGL resonates well with similar models in categorization and reasoning which also postulate that cognitive processing is geared towards the reduction of entropy.
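The abstract does not give the model's exact formulation, but the core idea, scoring a test item by how predictable it is from the training items, can be illustrated with a minimal sketch. The sketch below assumes, purely for illustration, that predictability is measured as the average Shannon surprisal of a test string's letter bigrams under the bigram distribution of the training strings; the chunk size, the smoothing floor, and the example strings are all hypothetical choices, not the paper's.

```python
import math
from collections import Counter

def bigram_distribution(strings):
    """Relative frequencies of letter bigrams across the training strings."""
    counts = Counter(s[i:i + 2] for s in strings for i in range(len(s) - 1))
    total = sum(counts.values())
    return {bg: c / total for bg, c in counts.items()}

def surprisal(test_item, dist, floor=1e-6):
    """Average Shannon surprisal (in bits) of a test item's bigrams under
    the training distribution; lower values mean a more predictable item.
    Unseen bigrams get a small floor probability to avoid log(0)."""
    bigrams = [test_item[i:i + 2] for i in range(len(test_item) - 1)]
    return sum(-math.log2(dist.get(bg, floor)) for bg in bigrams) / len(bigrams)

# Hypothetical training strings in the style of an AGL experiment
training = ["MTV", "MTTV", "VXM"]
dist = bigram_distribution(training)

# An item built from frequent training bigrams is more predictable
# (lower surprisal) than one built from unseen bigrams.
print(surprisal("MTV", dist) < surprisal("XXT", dist))  # → True
```

Under the entropy model's logic, items like `"MTV"` here, being more predictable from training, would be the ones participants are more likely to endorse as grammatical.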