Stephen Grossberg
Graduate Program in Cognitive and Neural Systems, Departments of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering, Center for Adaptive Systems, Boston University, Boston, MA, United States.
Front Neurorobot. 2020 Jun 25;14:36. doi: 10.3389/fnbot.2020.00036. eCollection 2020.
Biological neural network models whereby brains make minds help to understand autonomous adaptive intelligence. This article summarizes why the dynamics and emergent properties of such models for perception, cognition, emotion, and action are explainable, and thus amenable to being confidently implemented in large-scale applications. Key to their explainability is how these models combine fast activations, or short-term memory (STM) traces, and learned weights, or long-term memory (LTM) traces. Visual and auditory perceptual models have explainable conscious STM representations of visual surfaces and auditory streams in surface-shroud resonances and stream-shroud resonances, respectively. Deep Learning is often used to classify data. However, Deep Learning can experience catastrophic forgetting: At any stage of learning, an unpredictable part of its memory can collapse. Even if it makes some accurate classifications, they are not explainable and thus cannot be used with confidence. Deep Learning shares these problems with the back propagation algorithm, whose computational problems due to non-local weight transport during mismatch learning were described in the 1980s. Deep Learning became popular after very fast computers and huge online databases became available that enabled new applications despite these problems. Adaptive Resonance Theory, or ART, algorithms overcome the computational problems of back propagation and Deep Learning. ART is a self-organizing production system that incrementally learns, using arbitrary combinations of unsupervised and supervised learning and only locally computable quantities, to rapidly classify large non-stationary databases without experiencing catastrophic forgetting. ART classifications and predictions are explainable using the attended critical feature patterns in STM on which they build. The LTM adaptive weights of the fuzzy ARTMAP algorithm induce fuzzy IF-THEN rules that explain what feature combinations predict successful outcomes. ART has been successfully used in multiple large-scale real-world applications, including remote sensing, medical database prediction, and social media data clustering. Also explainable are the MOTIVATOR model of reinforcement learning and cognitive-emotional interactions, and the VITE, DIRECT, DIVA, and SOVEREIGN models for reaching, speech production, spatial navigation, and autonomous adaptive intelligence. These biological models exemplify complementary computing, and use local laws for match learning and mismatch learning that avoid the problems of Deep Learning.
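To make the ART learning cycle named in the abstract concrete, the following is a minimal Python sketch of fuzzy ART's category choice, vigilance matching, and match learning, following the published fuzzy ART equations (Carpenter, Grossberg, & Rosen, 1991). The class name, parameter defaults, and API are illustrative assumptions, not code from the paper.

import numpy as np

class FuzzyART:
    # Minimal fuzzy ART sketch (after Carpenter, Grossberg, & Rosen, 1991).
    # Inputs are complement coded so |I| is constant, which limits category
    # proliferation. The weights w_j are the LTM traces; the transient
    # choice values T_j play the role of STM activations.

    def __init__(self, rho=0.75, alpha=0.001, beta=1.0):
        self.rho = rho      # vigilance: how good a match must be to resonate
        self.alpha = alpha  # choice parameter
        self.beta = beta    # learning rate (1.0 = fast learning)
        self.w = []         # one LTM weight vector per committed category

    @staticmethod
    def complement_code(a):
        a = np.asarray(a, dtype=float)
        return np.concatenate([a, 1.0 - a])

    def train(self, a):
        # Present one input; return the index of the resonating category.
        I = self.complement_code(a)
        # Category choice (STM): T_j = |I ^ w_j| / (alpha + |w_j|),
        # where ^ is the component-wise minimum (fuzzy AND).
        T = [np.minimum(I, w).sum() / (self.alpha + w.sum()) for w in self.w]
        for j in np.argsort(T)[::-1]:  # search categories by activation
            w = self.w[j]
            if np.minimum(I, w).sum() / I.sum() >= self.rho:
                # Resonance: match learning updates only the attended
                # category, leaving other memories untouched -- this is
                # what avoids catastrophic forgetting.
                self.w[j] = self.beta * np.minimum(I, w) + (1 - self.beta) * w
                return j
            # Mismatch: reset this category and continue the search.
        self.w.append(I.copy())  # no category matches: recruit a new node
        return len(self.w) - 1

net = FuzzyART(rho=0.8)
for x in [[0.10, 0.20], [0.12, 0.22], [0.90, 0.85]]:
    print(net.train(x))  # prints 0, 0, 1: the third input forces a new category

With fast learning (beta = 1), each weight vector becomes the fuzzy intersection of every input its category has coded, which is why the attended critical feature patterns, and the classifications built on them, are directly inspectable.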
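The abstract's statement that fuzzy ARTMAP's LTM weights "induce fuzzy IF-THEN rules" has a standard geometric reading in the fuzzy ARTMAP literature (e.g., Carpenter et al., 1992); the following sketch is not spelled out in the abstract itself. With complement coding $I = (\mathbf{a}, \mathbf{1} - \mathbf{a})$, a committed weight vector can be written $w_j = (\mathbf{u}_j, \mathbf{1} - \mathbf{v}_j)$, which defines the hyper-rectangle

$$R_j = \{\mathbf{a} : \mathbf{u}_j \le \mathbf{a} \le \mathbf{v}_j\}.$$

The learned map from category $j$ to outcome $k$ then reads out as the rule "IF the input features lie in the box $R_j$, THEN predict $k$," and the vigilance $\rho$ bounds rule generality, since for $M$ features the box size satisfies $|R_j| \le M(1 - \rho)$.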