Massively parallel architectures for large scale neural network simulations.

Author information

Fujimoto Y, Fukuda N, Akabane T

Affiliation

Sharp Corp., Nara.

Publication information

IEEE Trans Neural Netw. 1992;3(6):876-88. doi: 10.1109/72.165590.

Abstract

A toroidal lattice architecture (TLA) and a planar lattice architecture (PLA) are proposed as massively parallel neurocomputer architectures for large-scale simulations. The performance of these architectures is almost proportional to the number of node processors, and they adopt the most efficient two-dimensional processor connections for WSI implementation. They also give a solution to the connectivity problem, the performance degradation caused by the data transmission bottleneck, and the load balancing problem for efficient parallel processing in large-scale neural network simulations. The general neuron model is defined. Implementation of the TLA with transputers is described. A Hopfield neural network and a multilayer perceptron have been implemented and applied to the traveling salesman problem and to identity mapping, respectively. Proof that the performance increases almost in proportion to the number of node processors is given.
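
The scaling claim in the abstract rests on distributing the synaptic weight matrix in blocks over a two-dimensional lattice of node processors, so that each processor carries an equal share of the multiply-accumulate work. Below is a minimal Python sketch of that idea; it is an illustration, not the authors' transputer implementation. The names partition_torus and parallel_step, the tanh activation (standing in for the paper's general neuron model), and the serial reduction loop are all assumptions made for the example.

```python
import numpy as np

def partition_torus(W, P, Q):
    """Split an N x N weight matrix into a P x Q grid of blocks,
    one block per node processor (a simplified version of the
    lattice mapping described in the paper)."""
    row_blocks = np.array_split(W, P, axis=0)
    return [np.array_split(rb, Q, axis=1) for rb in row_blocks]

def parallel_step(blocks, x, P, Q):
    """One synchronous update y = f(W @ x), computed block-wise.
    Each (p, q) processor multiplies its own block by its slice
    of x; the partial sums are then reduced across each processor
    row. In a real toroidal lattice this reduction travels only
    between nearest neighbours; here it is simulated serially."""
    x_slices = np.array_split(x, Q)
    partials = [
        sum(blocks[p][q] @ x_slices[q] for q in range(Q))
        for p in range(P)
    ]
    return np.tanh(np.concatenate(partials))  # tanh is a stand-in activation

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, P, Q = 12, 3, 4                # 12 neurons on a 3 x 4 lattice
    W = rng.standard_normal((N, N))
    x = rng.standard_normal(N)
    blocks = partition_torus(W, P, Q)
    y_parallel = parallel_step(blocks, x, P, Q)
    y_serial = np.tanh(W @ x)         # single-processor reference
    assert np.allclose(y_parallel, y_serial)
    print("block-parallel update matches the serial result")
```

Because each of the P * Q blocks holds roughly (N/P) x (N/Q) weights, per-processor work shrinks as processors are added, which is consistent with the near-linear speedup the abstract reports; the communication cost of the row-wise reduction is what the toroidal wiring keeps bounded.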
