

A computationally informed comparison between the strategies of rodents and humans in visual object recognition.

Affiliations

Department of Brain and Cognition & Leuven Brain Institute, Leuven, Belgium.

Department of Neurobiology, Harvard Medical School, Boston, United States.

Publication information

Elife. 2023 Dec 11;12:RP87719. doi: 10.7554/eLife.87719.

DOI: 10.7554/eLife.87719
PMID: 38079481
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10712954/
Abstract

Many species are able to recognize objects, but it has been proven difficult to pinpoint and compare how different species solve this task. Recent research suggested to combine computational and animal modelling in order to obtain a more systematic understanding of task complexity and compare strategies between species. In this study, we created a large multidimensional stimulus set and designed a visual discrimination task partially based upon modelling with a convolutional deep neural network (CNN). Experiments included rats (N = 11; 1115 daily sessions in total for all rats together) and humans (N = 45). Each species was able to master the task and generalize to a variety of new images. Nevertheless, rats and humans showed very little convergence in terms of which object pairs were associated with high and low performance, suggesting the use of different strategies. There was an interaction between species and whether stimulus pairs favoured early or late processing in a CNN. A direct comparison with CNN representations and visual feature analyses revealed that rat performance was best captured by late convolutional layers and partially by visual features such as brightness and pixel-level similarity, while human performance related more to the higher-up fully connected layers. These findings highlight the additional value of using a computational approach for the design of object recognition tasks. Overall, this computationally informed investigation of object recognition behaviour reveals a strong discrepancy in strategies between rodent and human vision.

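The abstract relates per-pair discrimination performance to low-level visual features such as brightness and pixel-level similarity. The sketch below illustrates that kind of analysis in minimal form; the image arrays, pair count, and accuracy scores are fabricated stand-ins, not the study's stimuli or data, and the feature definitions are common conventions rather than the authors' exact measures.

```python
# Sketch: correlate per-pair discrimination accuracy with two simple
# visual features (brightness difference, pixel-level similarity).
# All data below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)

def brightness(img):
    """Mean pixel intensity of a grayscale image."""
    return float(img.mean())

def pixel_similarity(a, b):
    """Pearson correlation between the flattened pixel values of two images."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return float(np.corrcoef(rx, ry)[0, 1])

# Fabricated 48x48 grayscale stimulus pairs and per-pair accuracies.
pairs = [(rng.random((48, 48)), rng.random((48, 48))) for _ in range(20)]
accuracy = rng.uniform(0.5, 1.0, size=20)  # e.g. percent correct per pair

bright_diff = np.array([abs(brightness(a) - brightness(b)) for a, b in pairs])
pix_sim = np.array([pixel_similarity(a, b) for a, b in pairs])

print("rho(accuracy, brightness diff):", round(spearman(accuracy, bright_diff), 3))
print("rho(accuracy, pixel similarity):", round(spearman(accuracy, pix_sim), 3))
```

The same correlational logic extends to CNN-based predictors: replace the feature vectors with per-pair distances computed from a given layer's activations and compare which layer best tracks each species' accuracy pattern.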

[Figure images removed: Fig 1 (+3 supplements), Fig 2, Fig 3 (+1 supplement), Figs 4-6, Fig 7 (+3 supplements), Fig 8, and author-response figures SA3-1 and SA3-2; available at PMC10712954.]

Similar articles

1. A computationally informed comparison between the strategies of rodents and humans in visual object recognition.
   Elife. 2023 Dec 11;12:RP87719. doi: 10.7554/eLife.87719.
2. Large-Scale, High-Resolution Comparison of the Core Visual Object Recognition Behavior of Humans, Monkeys, and State-of-the-Art Deep Artificial Neural Networks.
   J Neurosci. 2018 Aug 15;38(33):7255-7269. doi: 10.1523/JNEUROSCI.0388-18.2018. Epub 2018 Jul 13.
3. Using deep neural networks to evaluate object vision tasks in rats.
   PLoS Comput Biol. 2021 Mar 2;17(3):e1008714. doi: 10.1371/journal.pcbi.1008714. eCollection 2021 Mar.
4. Accuracy of Rats in Discriminating Visual Objects Is Explained by the Complexity of Their Perceptual Strategy.
   Curr Biol. 2018 Apr 2;28(7):1005-1015.e5. doi: 10.1016/j.cub.2018.02.037. Epub 2018 Mar 15.
5. Object similarity affects the perceptual strategy underlying invariant visual object recognition in rats.
   Front Neural Circuits. 2015 Mar 12;9:10. doi: 10.3389/fncir.2015.00010. eCollection 2015.
6. Neural representations of the perception of handwritten digits and visual objects from a convolutional neural network compared to humans.
   Hum Brain Mapp. 2023 Apr 1;44(5):2018-2038. doi: 10.1002/hbm.26189. Epub 2023 Jan 13.
7. Combining convolutional neural networks and cognitive models to predict novel object recognition in humans.
   J Exp Psychol Learn Mem Cogn. 2021 May;47(5):785-807. doi: 10.1037/xlm0000968. Epub 2020 Nov 5.
8. Transfer of Learning from Vision to Touch: A Hybrid Deep Convolutional Neural Network for Visuo-Tactile 3D Object Recognition.
   Sensors (Basel). 2020 Dec 27;21(1):113. doi: 10.3390/s21010113.
9. Understanding Human Object Vision: A Picture Is Worth a Thousand Representations.
   Annu Rev Psychol. 2023 Jan 18;74:113-135. doi: 10.1146/annurev-psych-032720-041031. Epub 2022 Nov 15.
10. Common Object Representations for Visual Production and Recognition.
    Cogn Sci. 2018 Nov;42(8):2670-2698. doi: 10.1111/cogs.12676. Epub 2018 Aug 20.

Cited by

1. Unraveling the complexity of rat object vision requires a full convolutional network and beyond.
   Patterns (N Y). 2025 Jan 17;6(2):101149. doi: 10.1016/j.patter.2024.101149. eCollection 2025 Feb 14.

References

1. Mouse visual cortex as a limited resource system that self-learns an ecologically-general representation.
   PLoS Comput Biol. 2023 Oct 2;19(10):e1011506. doi: 10.1371/journal.pcbi.1011506. eCollection 2023 Oct.
2. The importance of contrast features in rat vision.
   Sci Rep. 2023 Jan 10;13(1):459. doi: 10.1038/s41598-023-27533-3.
3. How Visual Expertise Changes Representational Geometry: A Behavioral and Neural Perspective.
   J Cogn Neurosci. 2021 Nov 5;33(12):2461-2476. doi: 10.1162/jocn_a_01778.
4. Training for object recognition with increasing spatial frequency: A comparison of deep learning with human vision.
   J Vis. 2021 Sep 1;21(10):14. doi: 10.1167/jov.21.10.14.
5. Using deep neural networks to evaluate object vision tasks in rats.
   PLoS Comput Biol. 2021 Mar 2;17(3):e1008714. doi: 10.1371/journal.pcbi.1008714. eCollection 2021 Mar.
6. Orthogonal Representations of Object Shape and Category in Deep Convolutional Neural Networks and Human Visual Cortex.
   Sci Rep. 2020 Feb 12;10(1):2453. doi: 10.1038/s41598-020-59175-0.
7. The Visual Acuity of Rats in Touchscreen Setups.
   Vision (Basel). 2019 Dec 31;4(1):4. doi: 10.3390/vision4010004.
8. Face categorization and behavioral templates in rats.
   J Vis. 2019 Dec 2;19(14):9. doi: 10.1167/19.14.9.
9. Evidence that recurrent circuits are critical to the ventral stream's execution of core object recognition behavior.
   Nat Neurosci. 2019 Jun;22(6):974-983. doi: 10.1038/s41593-019-0392-5. Epub 2019 Apr 29.
10. Nonlinear Processing of Shape Information in Rat Lateral Extrastriate Cortex.
    J Neurosci. 2019 Feb 27;39(9):1649-1670. doi: 10.1523/JNEUROSCI.1938-18.2018. Epub 2019 Jan 7.