Time-delay neural networks: representation and induction of finite-state machines.

Author information

Clouse D S, Giles C L, Horne B G, Cottrell G W

Affiliations

University of California, San Diego, La Jolla, CA.

Publication information

IEEE Trans Neural Netw. 1997;8(5):1065-70. doi: 10.1109/72.623208.

Abstract

In this work, we characterize and contrast the capabilities of the general class of time-delay neural networks (TDNNs) with input delay neural networks (IDNNs), the subclass of TDNNs with delays limited to the inputs. Each class of networks is capable of representing the same set of languages, those embodied by the definite memory machines (DMMs), a subclass of finite-state machines. We demonstrate the close affinity between TDNNs and DMM languages by learning a very large DMM (2048 states) using only a few training examples. Even though both architectures are capable of representing the same class of languages, they have distinguishable learning biases. Intuition suggests that general TDNNs which include delays in hidden layers should perform well, compared to IDNNs, on problems in which the output can be expressed as a function on narrow input windows which repeat in time. On the other hand, these general TDNNs should perform poorly when the input windows are wide, or there is little repetition. We confirm these hypotheses via a set of simulations and statistical analysis.
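
To make the DMM/TDNN connection concrete, here is a minimal sketch, not code from the paper: a definite memory machine is a finite-state machine whose output depends only on the last K input symbols (so a binary DMM over windows of width K has at most 2^K states), and an IDNN computes the same kind of mapping by applying a feedforward network to a sliding window of delayed inputs. The window width K, the parity-style target, and the random (untrained) weights below are illustrative assumptions.

```python
import numpy as np

K = 4  # window width; a binary DMM over length-K windows has up to 2**K states

def dmm_output(window):
    """Example DMM: output 1 iff the last K bits contain an odd number of 1s.
    Any function of only the last K inputs defines a definite memory machine."""
    return sum(window) % 2

def idnn_forward(window, W1, b1, w2, b2):
    """Input-delay network: a one-hidden-layer MLP applied to the current
    window of delayed inputs (delays appear only at the input layer)."""
    h = np.tanh(W1 @ np.asarray(window, dtype=float) + b1)
    return 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))  # sigmoid output unit

# Run both over an input stream; the list `window` acts as the delay line.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, K)), rng.normal(size=8)
w2, b2 = rng.normal(size=8), rng.normal()

stream = [1, 0, 1, 1, 0, 0, 1]
window = [0] * K  # zero-padded until K inputs have arrived
for x in stream:
    window = window[1:] + [x]  # shift in the newest input
    print(x, dmm_output(window),
          round(float(idnn_forward(window, W1, b1, w2, b2)), 3))
```

A general TDNN would differ from this sketch only in also placing delay lines on the hidden-unit activations, which, per the abstract, helps when the target is a function of narrow input windows that repeat in time and hurts when windows are wide or unrepeated.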
