Attentional Bias in Human Category Learning: The Case of Deep Learning.

Authors

Hanson Catherine, Caglar Leyla Roskan, Hanson Stephen José

Affiliations

Rutgers Brain Imaging Center, Newark, NJ, United States.

RUBIC and Psychology Department and Center for Molecular and Behavioral Neuroscience, Rutgers University-Newark, Newark, NJ, United States.

Publication

Front Psychol. 2018 Apr 13;9:374. doi: 10.3389/fpsyg.2018.00374. eCollection 2018.

DOI: 10.3389/fpsyg.2018.00374
PMID: 29706907
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC5909172/
Abstract

Category learning performance is influenced by both the nature of the category's structure and the way category features are processed during learning. Shepard (1964, 1987) showed that stimuli can have structures with features that are statistically uncorrelated (separable) or statistically correlated (integral) within categories. Humans find it much easier to learn categories having separable features, especially when attention to only a subset of relevant features is required, and harder to learn categories having integral features, which require consideration of all of the available features and integration of all the relevant category features satisfying the category rule (Garner, 1974). In contrast to humans, a single hidden layer backpropagation (BP) neural network has been shown to learn both separable and integral categories equally easily, independent of the category rule (Kruschke, 1993). This "failure" to replicate human category performance appeared to be strong evidence that connectionist networks were incapable of modeling human attentional bias. We tested the presumed limitations of attentional bias in networks in two ways: (1) by having networks learn categories with exemplars that have high feature complexity, in contrast to the low dimensional stimuli previously used, and (2) by investigating whether a Deep Learning (DL) network, which has demonstrated human-like performance in many different kinds of tasks (language translation, autonomous driving, etc.), would display human-like attentional bias during category learning. We were able to show a number of interesting results. First, we replicated the failure of BP to differentially process integral and separable category structures when low dimensional stimuli are used (Garner, 1974; Kruschke, 1993). Second, we show that using the same low dimensional stimuli, Deep Learning (DL), unlike BP but similar to humans, learns separable category structures more quickly than integral category structures. Third, we show that even BP can exhibit human-like learning differences between integral and separable category structures when high dimensional stimuli (face exemplars) are used. We conclude, after visualizing the hidden unit representations, that DL appears to extend initial learning due to feature development, thereby reducing destructive feature competition by incrementally refining feature detectors throughout later layers until a tipping point (in terms of error) is reached, resulting in rapid asymptotic learning.
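
To make the abstract's separable/integral distinction concrete, here is a minimal, hypothetical sketch in Python/NumPy. It is not the authors' code: the stimulus construction (make_category), network sizes, learning rate, and the 95% criterion in epochs_to_criterion are all illustrative assumptions. It samples categories whose features are either statistically uncorrelated (separable) or correlated (integral) within each category, then counts how many epochs of plain backprop a shallow versus a deeper sigmoid network needs to reach criterion.

```python
# Hypothetical sketch: separable vs. integral category structures and
# backprop learning speed. Illustrative only -- not the paper's stimuli,
# architectures, or training procedure.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def make_category(n, dim, corr, center):
    # Following the statistical definition quoted in the abstract:
    # corr = 0.0 -> features uncorrelated within the category (separable);
    # corr near 1 -> features correlated within the category (integral).
    cov = 0.01 * (np.full((dim, dim), corr) + (1.0 - corr) * np.eye(dim))
    return rng.multivariate_normal(center, cov, size=n)

def epochs_to_criterion(X, y, hidden_sizes, lr=1.0, max_epochs=5000, target=0.95):
    # Plain full-batch backprop on a fully connected sigmoid network; returns
    # the number of epochs needed to reach `target` training accuracy,
    # a crude proxy for learning speed.
    sizes = [X.shape[1], *hidden_sizes, 1]
    Ws = [rng.normal(0.0, 0.5, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
    bs = [np.zeros(b) for b in sizes[1:]]
    for epoch in range(1, max_epochs + 1):
        # Forward pass, keeping every layer's activations for backprop.
        acts = [X]
        for W, b in zip(Ws, bs):
            acts.append(sigmoid(acts[-1] @ W + b))
        if np.mean((acts[-1][:, 0] > 0.5) == (y > 0.5)) >= target:
            return epoch
        # Backward pass: squared-error loss, sigmoid derivative a * (1 - a).
        delta = (acts[-1] - y[:, None]) * acts[-1] * (1.0 - acts[-1])
        for i in range(len(Ws) - 1, -1, -1):
            grad_W = acts[i].T @ delta / len(X)
            grad_b = delta.mean(axis=0)
            if i > 0:  # propagate the error before this layer's weights change
                delta = (delta @ Ws[i].T) * acts[i] * (1.0 - acts[i])
            Ws[i] -= lr * grad_W
            bs[i] -= lr * grad_b
    return max_epochs  # criterion never reached

dim, n = 3, 100
for corr, label in [(0.0, "separable"), (0.9, "integral")]:
    A = make_category(n, dim, corr, center=np.full(dim, 0.3))
    B = make_category(n, dim, corr, center=np.full(dim, 0.7))
    X = np.vstack([A, B])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    shallow = epochs_to_criterion(X, y, hidden_sizes=(8,))
    deeper = epochs_to_criterion(X, y, hidden_sizes=(8, 8, 8))
    print(f"{label:9s}  1 hidden layer: {shallow:4d} epochs   3 hidden layers: {deeper:4d} epochs")
```

Whether the deeper network shows the human-like advantage for separable structures that the paper reports for DL will depend on the stimuli and architecture; this sketch only sets up the comparison.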

Figures

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/58cd/5909172/bffc8406fb1e/fpsyg-09-00374-g0001.jpg
Figure 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/58cd/5909172/393298aa0c93/fpsyg-09-00374-g0002.jpg
Figure 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/58cd/5909172/911b218e7a69/fpsyg-09-00374-g0003.jpg
Figure 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/58cd/5909172/aadb66606afc/fpsyg-09-00374-g0004.jpg
Figure 5: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/58cd/5909172/75acb3f4f524/fpsyg-09-00374-g0005.jpg
Figure 6: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/58cd/5909172/8bba579ee073/fpsyg-09-00374-g0006.jpg
Figure 7: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/58cd/5909172/1064bc96c8fc/fpsyg-09-00374-g0007.jpg

Similar Articles

1. Attentional Bias in Human Category Learning: The Case of Deep Learning.
   Front Psychol. 2018 Apr 13;9:374. doi: 10.3389/fpsyg.2018.00374. eCollection 2018.
2. REFRESH: A new approach to modeling dimensional biases in perceptual similarity and categorization.
   Psychol Rev. 2021 Nov;128(6):1145-1186. doi: 10.1037/rev0000310. Epub 2021 Sep 13.
3. Unsupervised category learning with integral-dimension stimuli.
   Q J Exp Psychol (Hove). 2012;65(8):1537-62. doi: 10.1080/17470218.2012.658821. Epub 2012 Apr 16.
4. Comparing methods of category learning: Classification versus feature inference.
   Mem Cognit. 2020 Jul;48(5):710-730. doi: 10.3758/s13421-020-01022-8.
5. Novel representations that support rule-based categorization are acquired on-the-fly during category learning.
   Psychol Res. 2019 Apr;83(3):544-566. doi: 10.1007/s00426-019-01157-7. Epub 2019 Feb 26.
6. Category-Biased Neural Representations Form Spontaneously during Learning That Emphasizes Memory for Specific Instances.
   J Neurosci. 2022 Feb 2;42(5):865-876. doi: 10.1523/JNEUROSCI.1396-21.2021. Epub 2021 Dec 22.
7. Learning about the internal structure of categories through classification and feature inference.
   Q J Exp Psychol (Hove). 2014;67(9):1786-807. doi: 10.1080/17470218.2013.871567. Epub 2014 Mar 3.
8. Accessing similarity and dimensional relations: effects of integrality and separability on the discovery of complex concepts.
   J Exp Psychol Gen. 1979 Jun;108(2):133-50. doi: 10.1037//0096-3445.108.2.133.
9. Category learning in older adulthood: A study of the Shepard, Hovland, and Jenkins (1961) tasks.
   Psychol Aging. 2016 Mar;31(2):185-197. doi: 10.1037/pag0000071. Epub 2016 Jan 14.
10. Combining exemplar-based category representations and connectionist learning rules.
    J Exp Psychol Learn Mem Cogn. 1992 Mar;18(2):211-33. doi: 10.1037//0278-7393.18.2.211.

Cited By

1. Analysis of gaze patterns during facade inspection to understand inspector sense-making processes.
   Sci Rep. 2023 Feb 20;13(1):2929. doi: 10.1038/s41598-023-29950-w.
2. What Can Computational Models Learn From Human Selective Attention? A Review From an Audiovisual Unimodal and Crossmodal Perspective.
   Front Integr Neurosci. 2020 Feb 27;14:10. doi: 10.3389/fnint.2020.00010. eCollection 2020.

References

1. Deep learning.
   Nature. 2015 May 28;521(7553):436-44. doi: 10.1038/nature14539.
2. The divergent autoencoder (DIVA) model of category learning.
   Psychon Bull Rev. 2007 Aug;14(4):560-76. doi: 10.3758/bf03196806.
3. A fast learning algorithm for deep belief nets.
   Neural Comput. 2006 Jul;18(7):1527-54. doi: 10.1162/neco.2006.18.7.1527.
4. SUSTAIN: a network model of category learning.
   Psychol Rev. 2004 Apr;111(2):309-32. doi: 10.1037/0033-295X.111.2.309.
5. Information reduction in the analysis of sequential tasks.
   Psychol Rev. 1964 Nov;71:491-504. doi: 10.1037/h0041120.
6. Feature binding, attention and object perception.
   Philos Trans R Soc Lond B Biol Sci. 1998 Aug 29;353(1373):1295-306. doi: 10.1098/rstb.1998.0284.
7. Toward a universal law of generalization for psychological science.
   Science. 1987 Sep 11;237(4820):1317-23. doi: 10.1126/science.3629243.
8. Learning as accumulation: a reexamination of the learning curve.
   Psychol Bull. 1978 Nov;85(6):1256-74.