A brain-inspired memory transformation based differentiable neural computer for reasoning-based question answering.

Authors

Liang Yao, Wang Yuwei, Fang Hongjian, Zhao Feifei, Zeng Yi

Affiliations

Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing, China.

School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China.

Publication

Front Artif Intell. 2025 Aug 14;8:1635932. doi: 10.3389/frai.2025.1635932. eCollection 2025.

Abstract

Reasoning and question answering, as fundamental cognitive functions in humans, remain significant hurdles for artificial intelligence. While large language models (LLMs) have achieved notable success, integrating explicit memory with structured reasoning capabilities remains a persistent difficulty. The Differentiable Neural Computer (DNC) model, despite addressing these issues to some extent, still faces challenges such as algorithmic complexity, slow convergence, and limited robustness. Inspired by the brain's learning and memory mechanisms, this paper proposes a Memory Transformation based Differentiable Neural Computer (MT-DNC) model. The MT-DNC integrates two brain-inspired memory modules within the DNC framework: a working memory module, inspired by the cognitive system, that temporarily holds and processes task-relevant information, and a long-term memory module that stores frequently accessed and enduring information. This integration enables the autonomous transformation of acquired experiences between the two memory systems, facilitating efficient knowledge extraction and enhancing reasoning capabilities. Experimental results on the bAbI question answering task demonstrate that the proposed method outperforms existing Deep Neural Network (DNN) and DNC models, achieving faster convergence and superior performance. Ablation studies further confirm that the transformation of memory from working memory to long-term memory is critical for improving the robustness and stability of reasoning. This work offers new insights into incorporating brain-inspired memory mechanisms into dialogue and reasoning systems.
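
The abstract describes the architecture only at a high level; the authors' implementation is not part of this record. The following is a minimal, hypothetical Python/NumPy sketch of the general idea of transferring frequently accessed working-memory contents into a long-term store. The class name MemoryTransformationSketch, the cosine-similarity read, and the access-count threshold are illustrative assumptions, not the paper's actual MT-DNC mechanism.

import numpy as np


def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()


class MemoryTransformationSketch:
    """Toy external memory with working-memory and long-term-memory stores.

    Slots in working memory that are read frequently are consolidated into
    long-term memory; the access-count threshold is an assumption made for
    this illustration, not a detail taken from the paper.
    """

    def __init__(self, wm_slots=16, ltm_slots=64, width=32, threshold=3):
        self.working = np.zeros((wm_slots, width))     # short-lived, task-relevant slots
        self.long_term = np.zeros((ltm_slots, width))  # durable store for consolidated slots
        self.access_counts = np.zeros(wm_slots, dtype=int)
        self.ltm_cursor = 0
        self.threshold = threshold

    def write(self, slot, vector):
        """Overwrite one working-memory slot with new content."""
        self.working[slot] = vector

    def read(self, key):
        """Content-based read over working memory (cosine-similarity addressing)."""
        norms = np.linalg.norm(self.working, axis=1) * np.linalg.norm(key) + 1e-8
        weights = softmax(self.working @ key / norms)
        # Count a slot as "accessed" when its read weight exceeds the uniform level.
        self.access_counts += (weights > 1.0 / len(weights)).astype(int)
        return weights @ self.working

    def consolidate(self):
        """Copy frequently accessed working-memory slots into long-term memory."""
        for i, count in enumerate(self.access_counts):
            if count >= self.threshold and self.ltm_cursor < len(self.long_term):
                self.long_term[self.ltm_cursor] = self.working[i]
                self.ltm_cursor += 1
                self.access_counts[i] = 0


# Minimal usage: write a slot, read several times, then consolidate.
mem = MemoryTransformationSketch()
mem.write(0, np.random.randn(32))
for _ in range(3):
    _ = mem.read(np.random.randn(32))
mem.consolidate()

In the full model, per the abstract, the working-to-long-term transformation is autonomous and integrated within the differentiable DNC framework, rather than triggered by a hard-coded count threshold as in this toy example.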
