Extracting finite-state representations from recurrent neural networks trained on chaotic symbolic sequences.

Author Information

Tino P, Köteles M

Affiliations

Department of Computer Science and Engineering, Slovak Technical University, Ilkovicova 3, 812 19 Bratislava, Slovakia.

Publication Information

IEEE Trans Neural Netw. 1999;10(2):284-302. doi: 10.1109/72.750555.

DOI: 10.1109/72.750555
PMID: 18252527
Abstract

While much work has been done in neural-based modeling of real-valued chaotic time series, little effort has been devoted to addressing similar problems in the symbolic domain. We investigate the knowledge induction process associated with training recurrent neural networks (RNNs) on single long chaotic symbolic sequences. Even though training RNNs to predict the next symbol leaves standard performance measures, such as the mean square error on the network output, virtually unchanged, the networks nevertheless extract a great deal of knowledge. We monitor the knowledge extraction process by considering the networks' stochastic sources and letting them generate sequences, which are then confronted with the training sequence via information-theoretic entropy and cross-entropy measures. We also study the possibility of reformulating the knowledge gained by the RNNs in the compact and easy-to-analyze form of finite-state stochastic machines. The experiments are performed on two sequences with different "complexities", measured by the size and state-transition structure of the induced Crutchfield epsilon-machines. We find that, with respect to the original RNNs, the extracted machines can achieve comparable or even better entropy and cross-entropy performance. Moreover, the RNNs reflect the training sequence complexity in their dynamical state representations, which can in turn be reformulated using finite-state means. Our findings are confirmed by a much more detailed analysis of model-generated sequences through the statistical-mechanical metaphor of entropy spectra. We also introduce a visual representation of the allowed block structure in the studied sequences that, besides having nice theoretical properties, allows, at the topological level, for an illustrative insight into both the RNN training and the finite-state stochastic machine extraction processes.
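To make the entropy-based comparison concrete, here is a minimal Python sketch (not the authors' code; all names are illustrative) that estimates the length-n block entropy of a symbolic sequence and the cross-entropy of a model-generated sequence against the training sequence:

```python
from collections import Counter
from math import log2

def block_distribution(seq, n):
    """Empirical distribution over length-n blocks of a symbolic sequence."""
    blocks = [seq[i:i + n] for i in range(len(seq) - n + 1)]
    counts = Counter(blocks)
    total = sum(counts.values())
    return {b: c / total for b, c in counts.items()}

def block_entropy(seq, n):
    """Shannon entropy (bits) of the length-n block distribution."""
    p = block_distribution(seq, n)
    return -sum(q * log2(q) for q in p.values())

def cross_entropy(train_seq, model_seq, n, eps=1e-12):
    """Cross-entropy (bits) of the training block distribution against the
    model-generated one; eps guards blocks the model never produced."""
    p = block_distribution(train_seq, n)   # statistics of the training source
    q = block_distribution(model_seq, n)   # statistics of the generated output
    return -sum(pb * log2(q.get(b, eps)) for b, pb in p.items())

# Toy usage: compare a training sequence with a model-generated one.
train = "abbabababbababbab" * 50
generated = "ababbababbabababb" * 50
print(block_entropy(train, 3))             # entropy of length-3 blocks
print(cross_entropy(train, generated, 3))  # penalizes mismatched statistics
```

A model whose generated sequences reproduce the training sequence's block statistics drives the cross-entropy down toward the training sequence's own block entropy, which is the sense in which the abstract's entropy and cross-entropy measures track knowledge extraction.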

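The finite-state machine extraction can be sketched in the same spirit. The snippet below is a simplified illustration, not the paper's procedure: it quantizes recorded RNN hidden states onto a coarse grid (a stand-in for the state-space clustering such extraction methods use) and tallies symbol-labelled transitions between the resulting discrete states to obtain a finite-state stochastic machine:

```python
from collections import defaultdict

def extract_stochastic_machine(states, symbols, resolution=0.5):
    """Quantize RNN hidden states into discrete machine states and count
    symbol-labelled transitions, yielding transition probabilities.

    states     : hidden-state vectors visited while reading the sequence
    symbols    : symbols[i] is the input that moved states[i] -> states[i+1]
    resolution : coarseness of the grid quantization (clustering stand-in)
    """
    def quantize(vec):
        # Map a real-valued hidden state to a grid cell (the machine state).
        return tuple(round(x / resolution) for x in vec)

    counts = defaultdict(lambda: defaultdict(int))
    for i, sym in enumerate(symbols):
        src, dst = quantize(states[i]), quantize(states[i + 1])
        counts[src][(sym, dst)] += 1

    # Normalize counts into transition probabilities P(symbol, next | state).
    machine = {}
    for src, out in counts.items():
        total = sum(out.values())
        machine[src] = {k: c / total for k, c in out.items()}
    return machine

# Toy usage with fabricated two-dimensional hidden states.
states = [(0.1, 0.9), (0.6, 0.2), (0.1, 0.8), (0.7, 0.3)]
symbols = "aba"
print(extract_stochastic_machine(states, symbols))
```

Generating sequences from the extracted machine is then a weighted random walk over these transitions, and those sequences are what get confronted with the training sequence via the entropy and cross-entropy measures sketched above.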

Similar Articles

1. Extracting finite-state representations from recurrent neural networks trained on chaotic symbolic sequences.
   IEEE Trans Neural Netw. 1999;10(2):284-302. doi: 10.1109/72.750555.

2. State-Regularized Recurrent Neural Networks to Extract Automata and Explain Predictions.
   IEEE Trans Pattern Anal Mach Intell. 2023 Jun;45(6):7739-7750. doi: 10.1109/TPAMI.2022.3225334. Epub 2023 May 5.

3. Noisy recurrent neural networks: the continuous-time case.
   IEEE Trans Neural Netw. 1998;9(5):913-36. doi: 10.1109/72.712164.

4. Emergence of belief-like representations through reinforcement learning.
   bioRxiv. 2023 Apr 4:2023.04.04.535512. doi: 10.1101/2023.04.04.535512.

5. Stimulus-Driven and Spontaneous Dynamics in Excitatory-Inhibitory Recurrent Neural Networks for Sequence Representation.
   Neural Comput. 2021 Sep 16;33(10):2603-2645. doi: 10.1162/neco_a_01418.

6. Adding learning to cellular genetic algorithms for training recurrent neural networks.
   IEEE Trans Neural Netw. 1999;10(2):239-52. doi: 10.1109/72.750546.

7. Markovian architectural bias of recurrent neural networks.
   IEEE Trans Neural Netw. 2004 Jan;15(1):6-15. doi: 10.1109/TNN.2003.820839.

8. A machine learning method for extracting symbolic knowledge from recurrent neural networks.
   Neural Comput. 2004 Jan;16(1):59-71. doi: 10.1162/08997660460733994.

9. Considerations in using recurrent neural networks to probe neural dynamics.
   J Neurophysiol. 2019 Dec 1;122(6):2504-2521. doi: 10.1152/jn.00467.2018. Epub 2019 Oct 16.

10. Existence and learning of oscillations in recurrent neural networks.
    IEEE Trans Neural Netw. 2000;11(1):205-14. doi: 10.1109/72.822523.

Cited By

1. A new method for inferring hidden Markov models from noisy time sequences.
   PLoS One. 2012;7(1):e29703. doi: 10.1371/journal.pone.0029703. Epub 2012 Jan 11.