
Large neural network simulations on multiple hardware platforms.

Authors

Hammarlund P, Ekeberg O

Affiliations

Studies of Artificial Neural Systems, Department of Numerical Analysis and Computing Science, Royal Institute of Technology, Stockholm, Sweden.

Publication Information

J Comput Neurosci. 1998 Dec;5(4):443-59. doi: 10.1023/a:1008893429695.

Abstract

To efficiently simulate very large networks of interconnected neurons, particular consideration has to be given to the computer architecture being used. This article presents techniques for implementing simulators for large neural networks on a number of different computer architectures. The neuronal simulation task and the computer architectures of interest are first characterized, and the potential bottlenecks are highlighted. Then we describe the experience gained from adapting an existing simulator, SWIM, to two very different architectures: vector computers and multiprocessor workstations. This work led to the implementation of a new simulation library, SPLIT, designed to allow efficient simulation of large networks on several architectures. Different computer architectures put different demands on the organization of both data structures and computations. Strict separation of such architecture considerations from the neuronal models and other simulation aspects makes it possible to construct both portable and extendible code.
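The abstract's closing point, that architecture-specific concerns can be kept strictly separate from the neuronal models, can be illustrated with a minimal C++ sketch. This is not the SPLIT or SWIM code; the names Backend, SerialBackend, and LeakyIntegrator are hypothetical, and the sketch only shows one way a model's update rule can stay independent of the loop structure and data layout that differ between, say, a vector machine and a multiprocessor workstation.

    // Hypothetical sketch, not the actual SPLIT API: the neuron model is a pure
    // state-update rule, while data layout and loop organization live in a
    // swappable backend class.
    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Neuron model: knows nothing about the hardware or the storage scheme.
    struct LeakyIntegrator {
        double tau = 10.0;  // membrane time constant in ms (illustrative value)
        double update(double v, double input, double dt) const {
            return v + dt * (-v / tau + input);  // forward-Euler step
        }
    };

    // Architecture layer: decides how the per-neuron update loop is executed.
    class Backend {
    public:
        virtual ~Backend() = default;
        virtual void step(std::vector<double>& v,
                          const std::vector<double>& input,
                          const LeakyIntegrator& model,
                          double dt) = 0;
    };

    // Plain serial backend; other backends would reorganize this loop
    // (vectorized strips, per-processor partitions) without touching the model.
    class SerialBackend : public Backend {
    public:
        void step(std::vector<double>& v,
                  const std::vector<double>& input,
                  const LeakyIntegrator& model,
                  double dt) override {
            for (std::size_t i = 0; i < v.size(); ++i)
                v[i] = model.update(v[i], input[i], dt);
        }
    };

    int main() {
        const std::size_t n = 4;
        std::vector<double> v(n, 0.0), input(n, 1.0);
        LeakyIntegrator model;
        SerialBackend backend;  // swap in another backend without changing the model
        for (int t = 0; t < 100; ++t)
            backend.step(v, input, model, 0.1);
        std::cout << "v[0] after 10 ms: " << v[0] << "\n";
        return 0;
    }

Under this kind of separation, a vector or multiprocessor backend supplies a different step() implementation while the neuron model code is reused unchanged, which is the portability and extendibility the abstract describes.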

