Hammarlund P, Ekeberg O
Studies of Artificial Neural Systems, Department of Numerical Analysis and Computing Science, Royal Institute of Technology, Stockholm, Sweden.
J Comput Neurosci. 1998 Dec;5(4):443-59. doi: 10.1023/a:1008893429695.
To efficiently simulate very large networks of interconnected neurons, particular consideration has to be given to the computer architecture being used. This article presents techniques for implementing simulators for large neural networks on a number of different computer architectures. The neuronal simulation task and the computer architectures of interest are first characterized, and the potential bottlenecks are highlighted. Then we describe the experience gained from adapting an existing simulator, SWIM, to two very different architectures: vector computers and multiprocessor workstations. This work led to the implementation of a new simulation library, SPLIT, designed to allow efficient simulation of large networks on several architectures. Different computer architectures put different demands on the organization of both data structures and computations. Strict separation of such architecture considerations from the neuronal models and other simulation aspects makes it possible to construct both portable and extendible code.