Suppr 超能文献



Causal importance of low-level feature selectivity for generalization in image recognition.

Affiliation

Department of Physiology, The University of Tokyo School of Medicine, Hongo, Bunkyo-ku, Tokyo 113-0033, Japan.

Publication

Neural Netw. 2020 May;125:185-193. doi: 10.1016/j.neunet.2020.02.009. Epub 2020 Feb 24.

DOI: 10.1016/j.neunet.2020.02.009
PMID: 32145648
Abstract

Although our brain and deep neural networks (DNNs) can perform high-level sensory-perception tasks, such as image or speech recognition, the inner mechanism of these hierarchical information-processing systems is poorly understood in both neuroscience and machine learning. Recently, Morcos et al. (2018) examined the effect of class-selective units in DNNs, i.e., units with high-level selectivity, on network generalization, concluding that hidden units that are selectively activated by specific input patterns may harm the network's performance. In this study, we revisited their hypothesis, considering units with selectivity for lower-level features, and argue that selective units are not always harmful to the network performance. Specifically, by using DNNs trained for image classification, we analyzed the orientation selectivity of individual units, a low-level selectivity widely studied in visual neuroscience. We found that orientation-selective units exist in both lower and higher layers of these DNNs, as in our brain. In particular, units in lower layers became more orientation-selective as the generalization performance improved during the course of training. Consistently, networks that generalized better were more orientation-selective in the lower layers. We finally revealed that ablating these selective units in the lower layers substantially degraded the generalization performance of the networks, at least by disrupting the shift-invariance of the higher layers. These results suggest that orientation selectivity can play a causally important role in object recognition, and that, contrary to the triviality of units with high-level selectivity, lower-layer units with selectivity for low-level features may be indispensable for generalization, at least for the several network architectures.
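As a rough illustration of the method the abstract describes (not the authors' actual code), the sketch below computes a circular-variance-style orientation selectivity index for each unit from its mean responses to a set of grating orientations, then "ablates" (zeroes out) strongly tuned units. The function names and the 0.5 threshold are illustrative assumptions.

```python
import numpy as np

def orientation_selectivity_index(responses):
    """Selectivity index in [0, 1]: 1 means the unit responds to a
    single orientation, 0 means equal response to all orientations.
    `responses` holds mean activations, one per orientation sampled
    uniformly over 0-180 degrees."""
    responses = np.asarray(responses, dtype=float)
    angles = np.linspace(0, np.pi, len(responses), endpoint=False)
    # Orientation is pi-periodic, so sum on the doubled angle.
    vector = np.sum(responses * np.exp(2j * angles))
    return np.abs(vector) / np.sum(responses)

def ablate_selective_units(activations, osi, threshold=0.5):
    """Zero out channels whose selectivity exceeds `threshold`.
    `activations` has shape (units, ...); `osi` has shape (units,)."""
    mask = np.asarray(osi) <= threshold  # keep only weakly tuned units
    return activations * mask.reshape(-1, *[1] * (activations.ndim - 1))
```

In the paper's setup, ablating the highly selective lower-layer units in this fashion is what degrades generalization, which is the basis of the causal claim.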


Similar articles

1. Causal importance of low-level feature selectivity for generalization in image recognition.
Neural Netw. 2020 May;125:185-193. doi: 10.1016/j.neunet.2020.02.009. Epub 2020 Feb 24.
2. Task-specific feature extraction and classification of fMRI volumes using a deep neural network initialized with a deep belief network: Evaluation using sensorimotor tasks.
Neuroimage. 2017 Jan 15;145(Pt B):314-328. doi: 10.1016/j.neuroimage.2016.04.003. Epub 2016 Apr 11.
3. Deciphering image contrast in object classification deep networks.
Vision Res. 2020 Aug;173:61-76. doi: 10.1016/j.visres.2020.04.015. Epub 2020 May 29.
4. Statistics of Visual Responses to Image Object Stimuli from Primate AIT Neurons to DNN Neurons.
Neural Comput. 2018 Feb;30(2):447-476. doi: 10.1162/neco_a_01039. Epub 2017 Nov 21.
5. Crowding in humans is unlike that in convolutional neural networks.
Neural Netw. 2020 Jun;126:262-274. doi: 10.1016/j.neunet.2020.03.021. Epub 2020 Mar 27.
6. A brain-inspired network architecture for cost-efficient object recognition in shallow hierarchical neural networks.
Neural Netw. 2021 Feb;134:76-85. doi: 10.1016/j.neunet.2020.11.013. Epub 2020 Nov 28.
7. Can multisensory training aid visual learning? A computational investigation.
J Vis. 2019 Sep 3;19(11):1. doi: 10.1167/19.11.1.
8. Three approaches to facilitate invariant neurons and generalization to out-of-distribution orientations and illuminations.
Neural Netw. 2022 Nov;155:119-143. doi: 10.1016/j.neunet.2022.07.026. Epub 2022 Jul 30.
9. Noise-trained deep neural networks effectively predict human vision and its neural responses to challenging images.
PLoS Biol. 2021 Dec 9;19(12):e3001418. doi: 10.1371/journal.pbio.3001418. eCollection 2021 Dec.
10. Evolutionary optimization of a hierarchical object recognition model.
IEEE Trans Syst Man Cybern B Cybern. 2005 Jun;35(3):426-37. doi: 10.1109/tsmcb.2005.846649.

Cited by

1. Predicting extremely low body weight from 12-lead electrocardiograms using a deep neural network.
Sci Rep. 2024 Feb 26;14(1):4696. doi: 10.1038/s41598-024-55453-3.