School of Electrical and Computer Engineering, Purdue University, West Lafayette, Indiana, 47906.
Purdue Institute for Integrative Neuroscience, Purdue University, West Lafayette, Indiana, 47906.
Hum Brain Mapp. 2018 May;39(5):2269-2282. doi: 10.1002/hbm.24006. Epub 2018 Feb 12.
The human visual cortex extracts both spatial and temporal visual features to support perception and guide behavior. Deep convolutional neural networks (CNNs) provide a computational framework for modeling cortical representation and organization in spatial visual processing, but they are unable to explain how the brain processes temporal information. To overcome this limitation, we extended a CNN by adding recurrent connections to different layers of the CNN, allowing spatial representations to be remembered and accumulated over time. The extended model, or recurrent neural network (RNN), embodied a hierarchical and distributed model of process memory as an integral part of visual processing. Unlike the CNN, the RNN learned spatiotemporal features from videos to enable action recognition. The RNN better predicted cortical responses to natural movie stimuli than the CNN in all visual areas, especially those along the dorsal stream. As a fully observable model of visual processing, the RNN also revealed a cortical hierarchy of temporal receptive windows, dynamics of process memory, and spatiotemporal representations. These results support the hypothesis of process memory, and demonstrate the potential of using the RNN for in-depth computational understanding of dynamic natural vision.
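The core architectural idea, adding a recurrent connection to a CNN layer so that its spatial features are accumulated across video frames, can be illustrated with a minimal sketch. This is an assumption-laden toy example, not the authors' exact architecture: it treats one layer's feature map as a flattened vector and applies a simple recurrence h_t = ReLU(x_t + W_h h_{t-1}).

```python
# Toy sketch (hypothetical, not the paper's exact model) of adding a
# recurrent connection to one CNN layer so its per-frame spatial features
# are remembered and accumulated over time.
import numpy as np

rng = np.random.default_rng(0)
n_features = 8   # size of one CNN layer's flattened feature vector (assumed)
T = 5            # number of video frames (assumed)

# Recurrent weights; small scale keeps the dynamics stable in this toy.
W_h = 0.1 * rng.standard_normal((n_features, n_features))

def recurrent_layer(frames):
    """Pass per-frame CNN features through a recurrent state:
    h_t = ReLU(x_t + W_h @ h_{t-1})."""
    h = np.zeros(n_features)
    states = []
    for x_t in frames:                      # one feature vector per frame
        h = np.maximum(0.0, x_t + W_h @ h)  # spatial input plus memory
        states.append(h)
    return np.stack(states)

frames = rng.standard_normal((T, n_features))  # stand-in for CNN outputs
states = recurrent_layer(frames)
print(states.shape)  # one hidden state per frame: (5, 8)
```

In the paper's model this kind of recurrence is attached to multiple CNN layers, so that each level of the spatial hierarchy carries its own process memory rather than memory being confined to a single late stage.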