
Symbolic Deep Networks: A Psychologically Inspired Lightweight and Efficient Approach to Deep Learning.

Affiliations

DCS Corp, Alexandria, VA.

Human Systems Integration Division (HSID), U.S. Army DEVCOM Data & Analysis Center (DAC).

Publication information

Top Cogn Sci. 2022 Oct;14(4):702-717. doi: 10.1111/tops.12571. Epub 2021 Oct 5.

Abstract

The last two decades have produced unprecedented successes in the fields of artificial intelligence and machine learning (ML), due almost entirely to advances in deep neural networks (DNNs). Deep hierarchical memory networks are not a novel concept in cognitive science and can be traced back more than a half century to Simon's early work on discrimination nets for simulating human expertise. The major difference between DNNs and the deep memory nets meant for explaining human cognition is that the latter are symbolic networks meant to model the dynamics of human memory and learning. Cognition-inspired symbolic deep networks (SDNs) address several known issues with DNNs, including (1) learning efficiency, where a much larger number of training examples are required for DNNs than would be expected for a human; (2) catastrophic interference, where what is learned by a DNN gets unlearned when a new problem is presented; and (3) explainability, where there is no way to explain what is learned by a DNN. This paper explores whether SDNs can achieve similar classification accuracy performance to DNNs across several popular ML datasets and discusses the strengths and weaknesses of each approach. Simulations reveal that (1) SDNs provide similar accuracy to DNNs in most cases, (2) SDNs are far more efficient than DNNs, (3) SDNs are as robust as DNNs to irrelevant/noisy attributes in the data, and (4) SDNs are far more robust to catastrophic interference than DNNs. We conclude that SDNs offer a promising path toward human-level accuracy and efficiency in category learning. More generally, ML frameworks could stand to benefit from cognitively inspired approaches, borrowing more features and functionality from models meant to simulate and explain human learning.
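The symbolic deep networks described above trace back to Simon's discrimination nets, which classify by routing an example through a tree of symbolic attribute tests and which learn incrementally by adding one new test node per misclassified example. The sketch below is only an illustration of that EPAM-style idea, not the authors' actual SDN implementation; the attribute names, dataset, and class structure are invented for the example. It shows the learning-efficiency point from the abstract: the net reaches perfect recall of the training set after a single pass, with one example per category.

```python
# Minimal EPAM-style discrimination net (illustrative sketch only).
# Examples are dicts of symbolic attributes; learning grows the net
# one test node at a time when an example is misclassified.

class Leaf:
    """Stores one exemplar and its category label."""
    def __init__(self, example, label):
        self.example, self.label = example, label

class Node:
    """Tests a single attribute and branches on its value."""
    def __init__(self, attr):
        self.attr = attr
        self.children = {}  # attribute value -> Leaf or Node

def classify(net, example):
    """Route an example down the net; None if no branch exists yet."""
    while isinstance(net, Node):
        net = net.children.get(example.get(net.attr))
        if net is None:
            return None
    return net.label if net else None

def train(net, example, label):
    """Insert one labeled example; returns the (possibly new) root."""
    if net is None:
        return Leaf(example, label)
    if isinstance(net, Leaf):
        if net.label == label:
            return net
        # Discriminate: split on the first attribute that differs
        # between the stored exemplar and the new example.
        for attr in example:
            if example.get(attr) != net.example.get(attr):
                node = Node(attr)
                node.children[net.example.get(attr)] = net
                node.children[example.get(attr)] = Leaf(example, label)
                return node
        return net  # identical attributes, conflicting labels: keep old
    branch = example.get(net.attr)
    net.children[branch] = train(net.children.get(branch), example, label)
    return net

# One-pass learning on a toy symbolic dataset.
net = None
data = [
    ({"shape": "round", "color": "red"},    "apple"),
    ({"shape": "long",  "color": "yellow"}, "banana"),
    ({"shape": "round", "color": "orange"}, "orange"),
]
for ex, lab in data:
    net = train(net, ex, lab)

print(classify(net, {"shape": "long", "color": "yellow"}))  # banana
```

Because each stored test node is a named attribute comparison, the learned structure is directly readable, which is the explainability contrast with DNNs drawn in the abstract; and adding a new category only grows a new branch rather than overwriting shared weights, which is why such nets resist catastrophic interference.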

