
Controversial stimuli: Pitting neural networks against each other as models of human cognition.

Affiliations

Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027;

Department of Computer Science, Columbia University, New York, NY 10027.

Publication Information

Proc Natl Acad Sci U S A. 2020 Nov 24;117(47):29330-29337. doi: 10.1073/pnas.1912334117.

Abstract

Distinct scientific theories can make similar predictions. To adjudicate between theories, we must design experiments for which the theories make distinct predictions. Here we consider the problem of comparing deep neural networks as models of human visual recognition. To efficiently compare models' ability to predict human responses, we synthesize controversial stimuli: images for which different models produce distinct responses. We applied this approach to two visual recognition tasks, handwritten digits (MNIST) and objects in small natural images (CIFAR-10). For each task, we synthesized controversial stimuli to maximize the disagreement among models which employed different architectures and recognition algorithms. Human subjects viewed hundreds of these stimuli, as well as natural examples, and judged the probability of presence of each digit/object category in each image. We quantified how accurately each model predicted the human judgments. The best-performing models were a generative analysis-by-synthesis model (based on variational autoencoders) for MNIST and a hybrid discriminative-generative joint energy model for CIFAR-10. These deep neural networks (DNNs), which model the distribution of images, performed better than purely discriminative DNNs, which learn only to map images to labels. None of the candidate models fully explained the human responses. Controversial stimuli generalize the concept of adversarial examples, obviating the need to assume a ground-truth model. Unlike natural images, controversial stimuli are not constrained to the stimulus distribution models are trained on, thus providing severe out-of-distribution tests that reveal the models' inductive biases. Controversial stimuli therefore provide powerful probes of discrepancies between models and human perception.
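The synthesis procedure the abstract describes — optimizing an image so that two models produce maximally different responses — can be sketched with toy linear-softmax "models" standing in for the paper's candidate DNNs. Everything here (weights, dimensions, step size) is a hypothetical minimal example, not the paper's actual method or networks:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Two toy linear-softmax "models" with different random weights. These are
# hypothetical stand-ins for candidate recognition models, not real DNNs.
rng = np.random.default_rng(0)
D, K = 16, 4            # flattened image size, number of categories
W_a = rng.normal(size=(K, D))
W_b = rng.normal(size=(K, D))

def predict(W, x):
    return softmax(W @ x)

def grad_logp(W, x, c):
    # Gradient of log softmax_c(W x) with respect to x for a linear model:
    # W[c] - sum_k p_k W[k]
    p = predict(W, x)
    return W[c] - p @ W

# Synthesize a "controversial" stimulus: push model A toward class ca and
# model B toward a different class cb, by projected gradient ascent on
# log p_A(ca | x) + log p_B(cb | x), keeping pixel values in [-1, 1].
ca, cb = 0, 1
x = 0.01 * rng.normal(size=D)
for _ in range(300):
    g = grad_logp(W_a, x, ca) + grad_logp(W_b, x, cb)
    x = np.clip(x + 0.1 * g, -1.0, 1.0)

pa, pb = predict(W_a, x), predict(W_b, x)
# The two models now assign different labels to the same image.
```

Such an image is diagnostic precisely because the models disagree on it: at most one of them can match the human judgment, so showing it to subjects discriminates between the models in a way natural images cannot.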


Similar Articles

2. Investigating object compositionality in Generative Adversarial Networks.
Neural Netw. 2020 Oct;130:309-325. doi: 10.1016/j.neunet.2020.07.007. Epub 2020 Jul 13.

4. Divergences in color perception between deep neural networks and humans.
Cognition. 2023 Dec;241:105621. doi: 10.1016/j.cognition.2023.105621. Epub 2023 Sep 14.

5. Deep Neural Networks as a Computational Model for Human Shape Sensitivity.
PLoS Comput Biol. 2016 Apr 28;12(4):e1004896. doi: 10.1371/journal.pcbi.1004896. eCollection 2016 Apr.

6. Deep Neural Networks for Modeling Visual Perceptual Learning.
J Neurosci. 2018 Jul 4;38(27):6028-6044. doi: 10.1523/JNEUROSCI.1620-17.2018. Epub 2018 May 23.

8. Generative adversarial networks with decoder-encoder output noises.
Neural Netw. 2020 Jul;127:19-28. doi: 10.1016/j.neunet.2020.04.005. Epub 2020 Apr 9.

Cited By

2. Beyond binding: from modular to natural vision.
Trends Cogn Sci. 2025 Jun;29(6):505-515. doi: 10.1016/j.tics.2025.03.002. Epub 2025 Apr 14.

3. How Can Deep Neural Networks Inform Theory in Psychological Science?
Curr Dir Psychol Sci. 2024 Oct;33(5):325-333. doi: 10.1177/09637214241268098. Epub 2024 Sep 11.

4. The canonical deep neural network as a model for human symmetry processing.
iScience. 2024 Dec 5;28(1):111540. doi: 10.1016/j.isci.2024.111540. eCollection 2025 Jan 17.

8. How well do models of visual cortex generalize to out of distribution samples?
PLoS Comput Biol. 2024 May 31;20(5):e1011145. doi: 10.1371/journal.pcbi.1011145. eCollection 2024 May.

References

1. Individual differences among deep neural network models.
Nat Commun. 2020 Nov 12;11(1):5725. doi: 10.1038/s41467-020-19632-w.

3. Humans can decipher adversarial images.
Nat Commun. 2019 Mar 22;10(1):1334. doi: 10.1038/s41467-019-08931-6.

4. Deep convolutional networks do not classify based on global object shape.
PLoS Comput Biol. 2018 Dec 7;14(12):e1006613. doi: 10.1371/journal.pcbi.1006613. eCollection 2018 Dec.

9. Deep Neural Networks as a Computational Model for Human Shape Sensitivity.
PLoS Comput Biol. 2016 Apr 28;12(4):e1004896. doi: 10.1371/journal.pcbi.1004896. eCollection 2016 Apr.
