Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States.
Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States; Department of Psychology, Columbia University, New York, NY, United States; Department of Neuroscience, Columbia University, New York, NY, United States; Affiliated member, Electrical Engineering, Columbia University, New York, NY, United States.
Curr Opin Neurobiol. 2020 Dec;65:176-193. doi: 10.1016/j.conb.2020.11.009. Epub 2020 Dec 3.
Biological visual systems exhibit abundant recurrent connectivity. State-of-the-art neural network models for visual recognition, by contrast, rely heavily or exclusively on feedforward computation. Any finite-time recurrent neural network (RNN) can be unrolled along time to yield an equivalent feedforward neural network (FNN). This important insight suggests that computational neuroscientists may not need to engage recurrent computation, and that computer-vision engineers may be limiting themselves to a special case of FNN if they build recurrent models. Here we argue, to the contrary, that FNNs are a special case of RNNs and that computational neuroscientists and engineers should engage recurrence to understand how brains and machines can (1) achieve greater and more flexible computational depth, (2) compress complex computations into limited hardware, (3) integrate priors and priorities into visual inference through expectation and attention, (4) exploit sequential dependencies in their data for better inference and prediction, and (5) leverage the power of iterative computation.
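The unrolling equivalence stated in the abstract can be made concrete with a minimal sketch (hypothetical dimensions, random weights, and a ReLU nonlinearity are assumptions for illustration, not details from the paper): a finite-time RNN run for T steps computes exactly the same function as a T-layer feedforward network whose layers all share one set of weights.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, T = 4, 8, 3
W_in = rng.standard_normal((n_hid, n_in)) * 0.1   # input weights
W_rec = rng.standard_normal((n_hid, n_hid)) * 0.1  # recurrent weights


def relu(z):
    return np.maximum(z, 0.0)


def rnn(x_seq, h0):
    """Run the recurrent network for T time steps."""
    h = h0
    for x_t in x_seq:
        h = relu(W_rec @ h + W_in @ x_t)
    return h


def unrolled_fnn(x_seq, h0):
    """The same computation written as a stack of T feedforward layers.
    The 'layers' all reuse (share) W_rec and W_in, illustrating why an
    unrolled RNN is a special, weight-tied case of an FNN."""
    layers = [(W_rec, W_in)] * len(x_seq)  # depth T, shared weights
    h = h0
    for (W_r, W_i), x_t in zip(layers, x_seq):
        h = relu(W_r @ h + W_i @ x_t)
    return h


x_seq = rng.standard_normal((T, n_in))
h0 = np.zeros(n_hid)
assert np.allclose(rnn(x_seq, h0), unrolled_fnn(x_seq, h0))
```

Note that a generic FNN need not tie its weights across layers, which is the sense in which (as the authors argue) FNNs are the larger class and unrolled RNNs a constrained special case of them.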