Fujimoto Y, Fukuda N, Akabane T
Sharp Corp., Nara.
IEEE Trans Neural Netw. 1992;3(6):876-88. doi: 10.1109/72.165590.
A toroidal lattice architecture (TLA) and a planar lattice architecture (PLA) are proposed as massively parallel neurocomputer architectures for large-scale simulations. The performance of these architectures is almost proportional to the number of node processors, and they adopt the most efficient two-dimensional processor connections for wafer-scale integration (WSI) implementation. They also address three obstacles to efficient parallel processing in large-scale neural network simulations: the connectivity problem, the performance degradation caused by the data-transmission bottleneck, and the load-balancing problem. The general neuron model is defined, and an implementation of the TLA with transputers is described. A Hopfield neural network and a multilayer perceptron have been implemented and applied to the traveling salesman problem and to identity mapping, respectively. It is proved that performance increases almost in proportion to the number of node processors.
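The central idea of a lattice neurocomputer is that the network's weight matrix is partitioned into blocks, one per node processor, so each synchronous update is computed as a sum of block matrix-vector products with only nearest-neighbor (wraparound) communication. The sketch below illustrates this in NumPy; the block partitioning, the 4-neighbor wiring, and the `distributed_update` helper are generic illustrations, not the paper's exact TLA scheme.

```python
# Hedged sketch: a Hopfield-style synchronous update computed block-by-block,
# as if each of the p x p node processors on a toroidal lattice held one
# block of the weight matrix. Illustrative only; not the paper's exact TLA.
import numpy as np

def toroidal_neighbors(r, c, p):
    """4-neighbors of node (r, c) on a p x p toroidal lattice (wraparound)."""
    return [((r - 1) % p, c), ((r + 1) % p, c),
            (r, (c - 1) % p), (r, (c + 1) % p)]

def distributed_update(W, s, p):
    """One synchronous update: node (r, c) multiplies its weight block by its
    slice of the state vector; row-wise sums give the full field h = W @ s."""
    n = len(s)
    b = n // p  # block size (assumes p divides n for simplicity)
    h = np.zeros(n)
    for r in range(p):
        for c in range(p):
            h[r*b:(r+1)*b] += W[r*b:(r+1)*b, c*b:(c+1)*b] @ s[c*b:(c+1)*b]
    return np.sign(h + (h == 0))  # tie-break zero fields to +1

# Small demo: a symmetric zero-diagonal weight matrix on a 2 x 2 lattice.
rng = np.random.default_rng(0)
n, p = 8, 2
W = rng.standard_normal((n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)
s = rng.choice([-1.0, 1.0], size=n)
s_next = distributed_update(W, s, p)
```

Because the blocks tile the matrix exactly, the distributed result equals the serial update `sign(W @ s)`; on real hardware the wraparound links let each node pass its state slice around the lattice, which is what keeps communication cost from dominating as the network grows.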