

Representational Distance Learning for Deep Neural Networks.

Authors

McClure Patrick, Kriegeskorte Nikolaus

Affiliation

MRC Cognition and Brain Sciences Unit Cambridge, UK.

Publication

Front Comput Neurosci. 2016 Dec 27;10:131. doi: 10.3389/fncom.2016.00131. eCollection 2016.

DOI: 10.3389/fncom.2016.00131
PMID: 28082889
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC5187453/
Abstract

Deep neural networks (DNNs) provide useful models of visual representational transformations. We present a method that enables a DNN (student) to learn from the internal representational spaces of a reference model (teacher), which could be another DNN or, in the future, a biological brain. Representational spaces of the student and the teacher are characterized by representational distance matrices (RDMs). We propose representational distance learning (RDL), a stochastic gradient descent method that drives the RDMs of the student to approximate the RDMs of the teacher. We demonstrate that RDL is competitive with other transfer learning techniques for two publicly available benchmark computer vision datasets (MNIST and CIFAR-100), while allowing for architectural differences between student and teacher. By pulling the student's RDMs toward those of the teacher, RDL significantly improved visual classification performance when compared to baseline networks that did not use transfer learning. In the future, RDL may enable combined supervised training of deep neural networks using task constraints (e.g., images and category labels) and constraints from brain-activity measurements, so as to build models that replicate the internal representational spaces of biological brains.
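The abstract's core mechanism — characterizing each network's representational space as an RDM over a batch of stimuli and minimizing the mismatch between student and teacher RDMs — can be sketched as follows. This is a minimal illustration, assuming squared-Euclidean distances and a mean-squared-error auxiliary loss; the paper's exact distance measure, loss weighting, and optimization details may differ.

```python
import numpy as np

def rdm(activations):
    """Representational distance matrix: pairwise squared Euclidean
    distances between activation vectors for a batch of stimuli.
    activations: array of shape (n_stimuli, n_units)."""
    sq = np.sum(activations ** 2, axis=1)
    dists = sq[:, None] + sq[None, :] - 2.0 * activations @ activations.T
    return np.maximum(dists, 0.0)  # clamp tiny negatives from rounding

def rdl_loss(student_acts, teacher_acts):
    """Auxiliary loss: mean squared difference between the student's and
    teacher's RDMs over the same batch of stimuli. Minimizing this with
    stochastic gradient descent pulls the student's representational
    geometry toward the teacher's."""
    return float(np.mean((rdm(student_acts) - rdm(teacher_acts)) ** 2))
```

Note that an RDM is always (n_stimuli, n_stimuli) regardless of how many units each layer has, which is what lets the student and teacher have different architectures, as the abstract emphasizes.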


Figures:
Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9e58/5187453/a7a4e572768e/fncom-10-00131-g0001.jpg
Figure 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9e58/5187453/ca829f406f6c/fncom-10-00131-g0002.jpg
Figure 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9e58/5187453/5b3d2ccbab45/fncom-10-00131-g0003.jpg
Figure 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9e58/5187453/6480995dfda5/fncom-10-00131-g0004.jpg
Figure 5: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9e58/5187453/d7e63a6482ae/fncom-10-00131-g0005.jpg
Figure 6: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9e58/5187453/f3614c9b081a/fncom-10-00131-g0006.jpg

Similar articles

1. Representational Distance Learning for Deep Neural Networks.
Front Comput Neurosci. 2016 Dec 27;10:131. doi: 10.3389/fncom.2016.00131. eCollection 2016.
2. Feature-reweighted representational similarity analysis: A method for improving the fit between computational models, brains, and behavior.
Neuroimage. 2022 Aug 15;257:119294. doi: 10.1016/j.neuroimage.2022.119294. Epub 2022 May 14.
3. Manipulating and measuring variation in deep neural network (DNN) representations of objects.
Cognition. 2024 Nov;252:105920. doi: 10.1016/j.cognition.2024.105920. Epub 2024 Aug 19.
4. Complementary label learning based on knowledge distillation.
Math Biosci Eng. 2023 Sep 19;20(10):17905-17918. doi: 10.3934/mbe.2023796.
5. Semi-supervised training of deep convolutional neural networks with heterogeneous data and few local annotations: An experiment on prostate histopathology image classification.
Med Image Anal. 2021 Oct;73:102165. doi: 10.1016/j.media.2021.102165. Epub 2021 Jul 14.
6. Densely Distilled Flow-Based Knowledge Transfer in Teacher-Student Framework for Image Classification.
IEEE Trans Image Process. 2020 Apr 6. doi: 10.1109/TIP.2020.2984362.
7. Deep Convolutional Neural Networks Outperform Feature-Based But Not Categorical Models in Explaining Object Similarity Judgments.
Front Psychol. 2017 Oct 9;8:1726. doi: 10.3389/fpsyg.2017.01726. eCollection 2017.
8. Deep Neural Networks and Visuo-Semantic Models Explain Complementary Components of Human Ventral-Stream Representational Dynamics.
J Neurosci. 2023 Mar 8;43(10):1731-1741. doi: 10.1523/JNEUROSCI.1424-22.2022. Epub 2023 Feb 9.
9. DNNBrain: A Unifying Toolbox for Mapping Deep Neural Networks and Brains.
Front Comput Neurosci. 2020 Nov 30;14:580632. doi: 10.3389/fncom.2020.580632. eCollection 2020.
10. Divergences in color perception between deep neural networks and humans.
Cognition. 2023 Dec;241:105621. doi: 10.1016/j.cognition.2023.105621. Epub 2023 Sep 14.

Cited by

1. The futuristic manifolds of REM sleep.
J Sleep Res. 2025 Apr;34(2):e14271. doi: 10.1111/jsr.14271. Epub 2024 Jun 22.
2. Neurons in auditory cortex integrate information within constrained temporal windows that are invariant to the stimulus context and information rate.
bioRxiv. 2025 Feb 14:2025.02.14.637944. doi: 10.1101/2025.02.14.637944.
3. in the orbitofrontal cortex explains how loss aversion adapts to the ranges of gain and loss prospects.
Elife. 2024 Dec 9;13:e80979. doi: 10.7554/eLife.80979.
4. Integrative processing in artificial and biological vision predicts the perceived beauty of natural images.
Sci Adv. 2024 Mar;10(9):eadi9294. doi: 10.1126/sciadv.adi9294. Epub 2024 Mar 1.
5. Editorial: Auditory perception and phantom perception in brains, minds and machines.
Front Neurosci. 2023 Oct 6;17:1293552. doi: 10.3389/fnins.2023.1293552. eCollection 2023.
6. Representational formats of human memory traces.
Brain Struct Funct. 2024 Apr;229(3):513-529. doi: 10.1007/s00429-023-02636-9. Epub 2023 Apr 6.
7. Challenging the Classical View: Recognition of Identity and Expression as Integrated Processes.
Brain Sci. 2023 Feb 10;13(2):296. doi: 10.3390/brainsci13020296.
8. Effects of neuromodulation-inspired mechanisms on the performance of deep neural networks in a spatial learning task.
iScience. 2023 Jan 23;26(2):106026. doi: 10.1016/j.isci.2023.106026. eCollection 2023 Feb 17.
9. Deep Neural Networks and Visuo-Semantic Models Explain Complementary Components of Human Ventral-Stream Representational Dynamics.
J Neurosci. 2023 Mar 8;43(10):1731-1741. doi: 10.1523/JNEUROSCI.1424-22.2022. Epub 2023 Feb 9.
10. How can artificial neural networks approximate the brain?
Front Psychol. 2023 Jan 9;13:970214. doi: 10.3389/fpsyg.2022.970214. eCollection 2022.

References

1. Using goal-driven deep learning models to understand sensory cortex.
Nat Neurosci. 2016 Mar;19(3):356-65. doi: 10.1038/nn.4244.
2. Deep Neural Networks Reveal a Gradient in the Complexity of Neural Representations across the Ventral Stream.
J Neurosci. 2015 Jul 8;35(27):10005-14. doi: 10.1523/JNEUROSCI.5023-14.2015.
3. Deep supervised, but not unsupervised, models may explain IT cortical representation.
PLoS Comput Biol. 2014 Nov 6;10(11):e1003915. doi: 10.1371/journal.pcbi.1003915. eCollection 2014 Nov.
4. Performance-optimized hierarchical models predict neural responses in higher visual cortex.
Proc Natl Acad Sci U S A. 2014 Jun 10;111(23):8619-24. doi: 10.1073/pnas.1403112111. Epub 2014 May 8.
5. A toolbox for representational similarity analysis.
PLoS Comput Biol. 2014 Apr 17;10(4):e1003553. doi: 10.1371/journal.pcbi.1003553. eCollection 2014 Apr.
6. Encoding and decoding in fMRI.
Neuroimage. 2011 May 15;56(2):400-10. doi: 10.1016/j.neuroimage.2010.07.073. Epub 2010 Aug 4.
7. Representational similarity analysis - connecting the branches of systems neuroscience.
Front Syst Neurosci. 2008 Nov 24;2:4. doi: 10.3389/neuro.06.004.2008. eCollection 2008.
8. Note on the correction for continuity in testing the significance of the difference between correlated proportions.
Psychometrika. 1948 Sep;13(3):185-7. doi: 10.1007/BF02289261.