
Comparing feedforward and recurrent neural network architectures with human behavior in artificial grammar learning.

Affiliations

CerCo, CNRS, 31055, Toulouse, France.

Laboratoire Cognition, Langues, Langage, Ergonomie, CNRS, Université Toulouse, Toulouse, France.

Publication info

Sci Rep. 2020 Dec 17;10(1):22172. doi: 10.1038/s41598-020-79127-y.

DOI: 10.1038/s41598-020-79127-y
PMID: 33335190
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7747619/
Abstract

In recent years artificial neural networks achieved performance close to or better than humans in several domains: tasks that were previously human prerogatives, such as language processing, have witnessed remarkable improvements in state of the art models. One advantage of this technological boost is to facilitate comparison between different neural networks and human performance, in order to deepen our understanding of human cognition. Here, we investigate which neural network architecture (feedforward vs. recurrent) matches human behavior in artificial grammar learning, a crucial aspect of language acquisition. Prior experimental studies proved that artificial grammars can be learnt by human subjects after little exposure and often without explicit knowledge of the underlying rules. We tested four grammars with different complexity levels both in humans and in feedforward and recurrent networks. Our results show that both architectures can "learn" (via error back-propagation) the grammars after the same number of training sequences as humans do, but recurrent networks perform closer to humans than feedforward ones, irrespective of the grammar complexity level. Moreover, similar to visual processing, in which feedforward and recurrent architectures have been related to unconscious and conscious processes, the difference in performance between architectures over ten regular grammars shows that simpler and more explicit grammars are better learnt by recurrent architectures, supporting the hypothesis that explicit learning is best modeled by recurrent networks, whereas feedforward networks supposedly capture the dynamics involved in implicit learning.
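The artificial grammars referred to in the abstract are regular grammars, i.e. finite-state automata that generate letter strings; subjects (and networks) must then judge whether new strings are grammatical. As a minimal sketch of that setup, the snippet below samples strings from a small Reber-style automaton and checks grammaticality by simulating it. The transition table is a hypothetical illustration, not one of the grammars actually tested in the paper.

```python
import random

# Hypothetical Reber-style regular grammar (illustrative only, not from the paper).
# Each state maps to (symbol, next_state) choices; next_state None marks termination.
GRAMMAR = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("V", 2), ("X", 1)],
    3: [("E", None)],
}

def generate(rng):
    """Sample one grammatical string by a random walk over the automaton."""
    state, out = 0, []
    while state is not None:
        sym, state = rng.choice(GRAMMAR[state])
        out.append(sym)
    return "".join(out)

def accepts(s):
    """Grammaticality judgment: simulate the automaton over the string."""
    states = {0}
    for ch in s:
        states = {to for st in states if st is not None
                  for sym, to in GRAMMAR.get(st, []) if sym == ch}
        if not states:
            return False
    return None in states
```

In the study's paradigm, strings generated this way serve as the exposure/training sequences, and both feedforward and recurrent networks are trained by error back-propagation to produce the grammaticality judgment that `accepts` computes symbolically here.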


Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/707a/7747619/2bbd0bbe7b8e/41598_2020_79127_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/707a/7747619/3caf04ff5c82/41598_2020_79127_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/707a/7747619/7505ac47e0b5/41598_2020_79127_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/707a/7747619/e7b3d9f62f2d/41598_2020_79127_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/707a/7747619/d6374a6ae7a0/41598_2020_79127_Fig5_HTML.jpg

Similar articles

1. Comparing feedforward and recurrent neural network architectures with human behavior in artificial grammar learning.
   Sci Rep. 2020 Dec 17;10(1):22172. doi: 10.1038/s41598-020-79127-y.
2. Role of prior knowledge in implicit and explicit learning of artificial grammars.
   Conscious Cogn. 2014 Aug;28:1-16. doi: 10.1016/j.concog.2014.06.003. Epub 2014 Jul 5.
3. An Entropy Metric for Regular Grammar Classification and Learning with Recurrent Neural Networks.
   Entropy (Basel). 2021 Jan 19;23(1):127. doi: 10.3390/e23010127.
4. Neural network processing of natural language: II. Towards a unified model of corticostriatal function in learning sentence comprehension and non-linguistic sequencing.
   Brain Lang. 2009 May-Jun;109(2-3):80-92. doi: 10.1016/j.bandl.2008.08.002. Epub 2008 Oct 5.
5. Implicit learning of recursive context-free grammars.
   PLoS One. 2012;7(10):e45885. doi: 10.1371/journal.pone.0045885. Epub 2012 Oct 19.
6. The Role of Simple Semantics in the Process of Artificial Grammar Learning.
   J Psycholinguist Res. 2017 Oct;46(5):1285-1308. doi: 10.1007/s10936-017-9494-y.
7. Visual artificial grammar learning by rhesus macaques (Macaca mulatta): exploring the role of grammar complexity and sequence length.
   Anim Cogn. 2018 Mar;21(2):267-284. doi: 10.1007/s10071-018-1164-4. Epub 2018 Feb 12.
8. Recurrent neural networks can explain flexible trading of speed and accuracy in biological vision.
   PLoS Comput Biol. 2020 Oct 2;16(10):e1008215. doi: 10.1371/journal.pcbi.1008215. eCollection 2020 Oct.
9. Five Ways in Which Computational Modeling Can Help Advance Cognitive Science: Lessons From Artificial Grammar Learning.
   Top Cogn Sci. 2020 Jul;12(3):925-941. doi: 10.1111/tops.12474. Epub 2019 Oct 30.
10. Implicit learning of artificial grammatical structures after inferior frontal cortex lesions.
   PLoS One. 2019 Sep 20;14(9):e0222385. doi: 10.1371/journal.pone.0222385. eCollection 2019.

Cited by

1. Recurrent neural network based high-precision position compensation control of magnetic levitation system.
   Sci Rep. 2022 Jul 6;12(1):11435. doi: 10.1038/s41598-022-15638-0.

References

1. Unconscious associative learning with conscious cues.
   Neurosci Conscious. 2016 Jan;2016(1):niw016. doi: 10.1093/nc/niw016. Epub 2016 Oct 2.
2. Delay-Induced Multistability and Loop Formation in Neuronal Networks with Spike-Timing-Dependent Plasticity.
   Sci Rep. 2018 Aug 13;8(1):12068. doi: 10.1038/s41598-018-30565-9.
3. Implicit sequence learning despite multitasking: the role of across-task predictability.
   Psychol Res. 2019 Apr;83(3):526-543. doi: 10.1007/s00426-017-0920-4. Epub 2017 Sep 26.
4. A model of human motor sequence learning explains facilitation and interference effects based on spike-timing dependent plasticity.
   PLoS Comput Biol. 2017 Aug 2;13(8):e1005632. doi: 10.1371/journal.pcbi.1005632. eCollection 2017 Aug.
5. Neuroscience-Inspired Artificial Intelligence.
   Neuron. 2017 Jul 19;95(2):245-258. doi: 10.1016/j.neuron.2017.06.011.
6. Perception Science in the Age of Deep Neural Networks.
   Front Psychol. 2017 Feb 2;8:142. doi: 10.3389/fpsyg.2017.00142. eCollection 2017.
7. Dendritic and Axonal Propagation Delays Determine Emergent Structures of Neuronal Networks with Plastic Synapses.
   Sci Rep. 2017 Jan 3;7:39682. doi: 10.1038/srep39682.
8. Self-similarity and recursion as default modes in human cognition.
   Cortex. 2017 Dec;97:183-201. doi: 10.1016/j.cortex.2016.08.016. Epub 2016 Sep 23.
9. Neural correlates of consciousness: progress and problems.
   Nat Rev Neurosci. 2016 May;17(5):307-21. doi: 10.1038/nrn.2016.22.
10. Using goal-driven deep learning models to understand sensory cortex.
   Nat Neurosci. 2016 Mar;19(3):356-65. doi: 10.1038/nn.4244.