Remembering words in context as predicted by an associative read-out model.

Author information

Hofmann Markus J, Kuchinke Lars, Biemann Chris, Tamm Sascha, Jacobs Arthur M

Affiliations

Neurocognitive Psychology, Department of Psychology, Freie Universität Berlin, Berlin, Germany.

Publication information

Front Psychol. 2011 Oct 4;2:252. doi: 10.3389/fpsyg.2011.00252. eCollection 2011.

Abstract

Interactive activation models (IAMs) simulate orthographic and phonological processes in implicit memory tasks, but they account for neither associative relations between words nor explicit memory performance. To overcome both limitations, we introduce the associative read-out model (AROM), an IAM extended by an associative layer implementing long-term associations between words. In line with Hebbian learning, two words were defined as "associated" if they co-occurred significantly often in the sentences of a large corpus. In a study-test task, a greater number of associated items in the stimulus set increased the "yes" response rates for both non-learned and learned words. To model test-phase performance, the associative layer is initialized with greater activation for learned than for non-learned items. Because IAMs scale inhibitory activation changes by the initial activation, learned items gain greater signal variability than non-learned items, irrespective of the choice of the free parameters. This explains why the slope of the z-transformed receiver-operating characteristics (z-ROCs) is lower than one during recognition memory. When fitted to the empirical z-ROCs, the model likewise predicted which word is recognized with which probability at the item level. Since many of the strongest associates reflect semantic relations to the presented word (e.g., synonymy), the AROM merges form-based aspects of meaning representation with meaning relations between words.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/513e/3185299/15038ba9850e/fpsyg-02-00252-g001.jpg
