Department of General Psychology, Justus-Liebig University, D-35394 Giessen, Germany.
Vision Res. 2020 Aug;173:61-76. doi: 10.1016/j.visres.2020.04.015. Epub 2020 May 29.
The ultimate goal of neuroscience is to explain how complex behaviour arises from neuronal activity. A comparable level of complexity also emerges in deep neural networks (DNNs), which exhibit human-level performance in demanding visual tasks. Unlike in biological systems, all parameters and operations of DNNs are accessible. Therefore, in theory, it should be possible to decipher the exact mechanisms learnt by these artificial networks. Here, we investigate the concept of contrast invariance within the framework of DNNs. We first discuss how a network can achieve robustness to changes in local and global image contrast. We then use a technique from neuroscience, "kernel lesion", to measure the degree of performance degradation when individual kernels are eliminated from a network. We further compare contrast normalisation, a mechanism used in biological systems, to the strategies that DNNs learn to cope with changes in contrast. The results of our analysis suggest that (i) contrast is a low-level feature for these networks, and it is encoded in the shallow layers; (ii) a handful of kernels appear to have a greater impact on this feature, and their removal causes a substantially larger accuracy loss for low-contrast images; (iii) edges are a distinct visual feature within the internal representation of object classification DNNs.
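The two manipulations named above, rescaling image contrast and "kernel lesion" (zeroing an individual convolutional kernel and observing the effect on the network's output), can be illustrated with a minimal sketch. This is not the authors' code: the NumPy stand-in below uses a toy convolutional layer, and the function names (`set_contrast`, `conv2d`, `lesion_kernel`) are assumptions chosen for illustration; in the actual study the ablation would be applied to a trained DNN and measured as classification accuracy loss.

```python
import numpy as np

def set_contrast(image, c):
    """Rescale image contrast by factor c around the image mean.

    c = 1 leaves the image unchanged; c < 1 produces a low-contrast version.
    """
    mean = image.mean()
    return mean + c * (image - mean)

def conv2d(image, kernels):
    """Valid-mode 2D convolution: one output feature map per kernel."""
    n, kh, kw = kernels.shape
    H, W = image.shape
    out = np.zeros((n, H - kh + 1, W - kw + 1))
    for k in range(n):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[k, i, j] = np.sum(image[i:i + kh, j:j + kw] * kernels[k])
    return out

def lesion_kernel(kernels, index):
    """'Kernel lesion': zero out one kernel while leaving the rest intact."""
    lesioned = kernels.copy()
    lesioned[index] = 0.0
    return lesioned

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))
kernels = rng.standard_normal((4, 3, 3))

# Lesioning kernel 2 silences only its own feature map.
full = conv2d(image, kernels)
ablated = conv2d(image, lesion_kernel(kernels, 2))

# A low-contrast stimulus for the same layer: same mean, reduced spread.
low_contrast = set_contrast(image, 0.25)
```

In the paper's setting, one would compare the network's accuracy before and after each lesion, separately for high- and low-contrast test images, to identify the "handful of kernels" whose removal hurts low-contrast performance most.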