Centre for Human Brain Health, School of Psychology, University of Birmingham, Birmingham, United Kingdom.
Department of Neuroscience, Brown University, Providence, Rhode Island, United States of America.
PLoS Comput Biol. 2024 Sep 11;20(9):e1012429. doi: 10.1371/journal.pcbi.1012429. eCollection 2024 Sep.
The field of computer vision has long drawn inspiration from neuroscientific studies of the human and non-human primate visual system. The development of convolutional neural networks (CNNs), for example, was informed by the properties of simple and complex cells in early visual cortex. However, the computational relevance of oscillatory dynamics experimentally observed in the visual system is typically not considered in artificial neural networks (ANNs). Computational models of neocortical dynamics, on the other hand, rarely take inspiration from computer vision. Here, we combine methods from computational neuroscience and machine learning to implement multiplexing in a simple ANN using oscillatory dynamics. We first trained the network to classify individually presented letters. Post-training, we added temporal dynamics to the hidden layer, introducing refraction in the hidden units as well as pulsed inhibition mimicking neuronal alpha oscillations. Without these dynamics, the trained network correctly classified individual letters but produced a mixed output when presented with two letters simultaneously, indicating a bottleneck problem. When refraction and oscillatory inhibition were introduced, the output nodes corresponding to the two stimuli activated sequentially, ordered along the phase of the inhibitory oscillations. Our model implements the idea that inhibitory oscillations segregate competing inputs in time. The results of our simulations pave the way for applications in deeper network architectures and more complicated machine learning problems.
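The core mechanism described above, where pulsed inhibition releases strongly driven units earlier in the oscillatory cycle than weakly driven ones, can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a toy model of threshold units with a refractory period under a sinusoidal (alpha-like) inhibitory signal, with all parameter values (frequency, amplitude, refractory duration) chosen for illustration only:

```python
import numpy as np

def simulate(drives, alpha_freq=10.0, dt=1e-3, t_max=0.2,
             inhib_amp=1.0, threshold=0.0, refractory=0.05):
    """Toy simulation of threshold units under pulsed (alpha) inhibition.

    drives: constant excitatory drive per hidden unit (e.g. evoked by
            two simultaneously presented stimuli of unequal strength).
    Returns the first activation time of each unit (np.nan if silent).
    """
    n = len(drives)
    last_spike = np.full(n, -np.inf)   # time of most recent activation
    first_spike = np.full(n, np.nan)   # time of first activation
    for t in np.arange(0.0, t_max, dt):
        # Alpha inhibition: maximal at phase 0, ramping down over the cycle,
        # so it releases units in order of their excitatory drive.
        inhib = inhib_amp * (1 + np.cos(2 * np.pi * alpha_freq * t)) / 2
        # A unit activates if its net drive clears threshold and it is
        # outside its refractory period.
        active = (drives - inhib > threshold) & (t - last_spike > refractory)
        first_spike[np.isnan(first_spike) & active] = t
        last_spike[active] = t
    return first_spike

# Two "stimuli": units 0-1 driven strongly, units 2-3 weakly.
times = simulate(np.array([0.9, 0.85, 0.5, 0.45]))
```

In this sketch, the strongly driven units escape the inhibition early in the alpha cycle, while the weakly driven units only activate once inhibition has decayed further, yielding the phase-ordered, time-multiplexed activation sequence the abstract describes.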