
Linear function neurons: structure and training.

Author information

Hampson S E, Volper D J

Publication information

Biol Cybern. 1986;53(4):203-17. doi: 10.1007/BF00336991.

Abstract

Three different representations for a thresholded linear equation are developed. For binary input they are shown to be representationally equivalent, though their training characteristics differ. A training algorithm for linear equations is discussed. The similarities between its simplest mathematical representation (perceptron training), a formal model of animal learning (Rescorla-Wagner learning), and one mechanism of neural learning (Aplysia gill withdrawal) are pointed out. For d input features, perceptron training is shown to have a lower bound of 2^d and an upper bound of d^d adjusts. It is possible that the true upper bound is 4^d, though this has not been proved. Average performance is shown to have a lower bound of 1.4^d. Learning time is shown to increase linearly with the number of irrelevant or replicated features. The (X of N) function (a subset of linearly separable functions containing OR and AND) is shown to be learnable in d^3 time. A method of utilizing conditional probability to accelerate learning is proposed. This reduces the observed growth rate from 4^d to the theoretical minimum (for the unmodified version) of 2^d. A different version reduces the growth rate to about 1.7^d. The linear effect of irrelevant features can also be eliminated. Whether such an approach can be made provably convergent is not known.
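The perceptron training rule referred to in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; the learning rate of 1, the bias handling, and the 2-input AND example (an "X of N" function with X = N = 2) are illustrative assumptions. The `adjusts` counter corresponds to the cost measure bounded in the abstract.

```python
# Minimal sketch of perceptron training on binary inputs.
# Illustrative only: hyperparameters and the AND example are not from the paper.

def perceptron_train(samples, d, max_epochs=100):
    """Train a thresholded linear equation (w . x >= theta) on labeled samples.

    samples: list of (x, label), x a tuple of d binary features, label a bool.
    Returns (weights, theta, adjusts), where adjusts counts weight updates.
    """
    w = [0.0] * d
    theta = 0.0
    adjusts = 0
    for _ in range(max_epochs):
        errors = 0
        for x, label in samples:
            predicted = sum(wi * xi for wi, xi in zip(w, x)) >= theta
            if predicted != label:
                # Classic perceptron rule: shift the hyperplane toward the example.
                sign = 1.0 if label else -1.0
                w = [wi + sign * xi for wi, xi in zip(w, x)]
                theta -= sign  # equivalent to training a bias weight
                adjusts += 1
                errors += 1
        if errors == 0:  # converged: all samples classified correctly
            break
    return w, theta, adjusts

# Example: learn 2-input AND, a linearly separable function.
data = [((0, 0), False), ((0, 1), False), ((1, 0), False), ((1, 1), True)]
w, theta, adjusts = perceptron_train(data, d=2)
```

Because AND is linearly separable, the loop terminates with zero errors after a bounded number of adjustments; on non-separable targets the rule would cycle until `max_epochs` is exhausted.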
