Gosmann Jan, Eliasmith Chris
Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada.
Front Neuroinform. 2017 May 4;11:33. doi: 10.3389/fninf.2017.00033. eCollection 2017.
One critical factor limiting the size of neural cognitive models is the time required to simulate such models. To reduce simulation time, specialized hardware is often used. However, such hardware can be costly, not readily available, or require specialized software implementations that are difficult to maintain. Here, we present an algorithm that optimizes the computational graph of the Nengo neural network simulator, allowing simulations to run more quickly on commodity hardware. This is achieved by merging identical operations into single operations and restructuring the accessed data into larger blocks of sequential memory. In this way, a speed-up of up to 6.8 times is obtained. While this does not beat the specialized OpenCL implementation of Nengo, this optimization is available on any platform that can run Python. In contrast, the OpenCL implementation supports fewer platforms and can be difficult to install.
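The core idea of the optimization, merging identical operations and laying their data out in contiguous memory, can be illustrated with a minimal NumPy sketch. This is a hypothetical example of the general technique, not Nengo's actual optimizer code; all names below are invented for illustration.

```python
import numpy as np

# Hypothetical illustration of operator merging: many identical small
# operations, each dispatched separately, vs. one merged operation over
# contiguous memory. The data and names are invented for this sketch.

rng = np.random.default_rng(0)
n_ops, dim = 1000, 64

# Unmerged: a separate weight vector and signal array per operator,
# scattered across many small allocations.
weights = [rng.standard_normal(dim) for _ in range(n_ops)]
signals = [rng.standard_normal(dim) for _ in range(n_ops)]

def step_unmerged():
    # One Python-level call per operator: high interpreter overhead.
    return np.array([w @ x for w, x in zip(weights, signals)])

# Merged: stack the same data once into contiguous 2-D blocks...
W = np.vstack(weights)   # shape (n_ops, dim), sequential memory
X = np.vstack(signals)   # shape (n_ops, dim), sequential memory

def step_merged():
    # ...then evaluate all operators with a single vectorized call,
    # which amortizes dispatch overhead and improves cache behavior.
    return np.einsum("ij,ij->i", W, X)

# Both formulations compute the same result.
assert np.allclose(step_unmerged(), step_merged())
```

The merged version trades a one-time restructuring cost for a single vectorized call per simulation step, which is the same trade-off the paper's computational-graph optimization exploits.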