Mauduit N, Duranton M, Gobert J, Sirat J A
Dept. of Electr. and Comput. Eng., California Univ., San Diego, La Jolla, CA.
IEEE Trans Neural Netw. 1992;3(3):414-22. doi: 10.1109/72.129414.
Neural network simulations on a parallel architecture are reported. The architecture is scalable and flexible enough to simulate various kinds of networks and paradigms. The computing device is based on an existing coarse-grain parallel framework (INMOS transputers), augmented with finer-grain parallelism through VLSI chips called Lneuro 1.0 (for LEP neuromimetic) circuits. The modular architecture of the circuit makes it possible to build various kinds of boards to match the expected range of applications, or to increase the power of the system by adding more hardware. The resulting machine remains reconfigurable, to some extent, to accommodate a specific problem. A small-scale machine using 16 Lneuros has been realized to test the behavior of this architecture experimentally. Results are presented for an integer version of Kohonen feature maps; the speedup factor increases regularly with the number of clusters involved, up to a factor of 80. Ways to improve this family of neural network simulation machines are also investigated.
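The abstract mentions an integer version of Kohonen feature maps but gives no implementation details. As a generic illustration only (not the authors' code), the sketch below shows one common way to run a Kohonen map in pure integer arithmetic, as fixed-point neural hardware typically does: Manhattan distance for the winner search and a shift-based learning rate (`w += (x - w) >> shift`). The 1-D neighborhood, the distance metric, and the shift/radius schedules are all assumptions for illustration.

```python
import random

def train_integer_som(data, n_units, dim, epochs=9, seed=0):
    """Minimal integer-only Kohonen feature map (illustrative sketch).

    Assumptions (not from the paper): 1-D unit topology, Manhattan
    distance, and a learning rate implemented as a right shift so the
    whole update needs no floating point.
    """
    rng = random.Random(seed)
    # Integer weights, e.g. 8-bit range as in fixed-point hardware.
    weights = [[rng.randrange(256) for _ in range(dim)]
               for _ in range(n_units)]
    for epoch in range(epochs):
        shift = 2 + epoch // 3                      # coarser steps over time
        radius = max(n_units // 4 - epoch // 3, 0)  # shrinking neighborhood
        for x in data:
            # Winner = unit with smallest Manhattan distance (integers only).
            dists = [sum(abs(xi - wi) for xi, wi in zip(x, w))
                     for w in weights]
            win = dists.index(min(dists))
            # Move the winner and its 1-D neighbors toward the input;
            # the arithmetic shift steps toward x for either sign of (x - w).
            for j in range(max(0, win - radius),
                           min(n_units, win + radius + 1)):
                weights[j] = [w + ((xi - w) >> shift)
                              for xi, w in zip(x, weights[j])]
    return weights
```

The winner search over clusters is the part that parallelizes naturally, which is consistent with the reported speedup growing with the number of clusters: each Lneuro-style processing element can compute distances for its own subset of units concurrently.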