Végh János
Kalimános BT, Komlóssy u 26, Debrecen, 4032, Hungary.
Brain Inform. 2019 Apr 11;6(1):4. doi: 10.1186/s40708-019-0097-2.
As we learn more and more details about how neurons and complex neural networks operate, and as the demand grows for building huge artificial networks that actually perform, increasing effort is being devoted to hardware and/or software simulators and to supercomputers targeting artificial intelligence applications, which demand an exponentially growing amount of computing capacity. However, the inherently parallel operation of neural networks is mostly simulated with inherently sequential (or, at best, sequential-parallel) computing elements. This paper shows that neural network simulators, both software and hardware ones, like all other sequential-parallel computing systems, face a computing performance limit imposed by clock-driven electronic circuits, by the 70-year-old computing paradigm, and by Amdahl's Law for parallelized computing systems. The findings explain the performance limitation and saturation observed in earlier studies.
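The role Amdahl's Law plays in the claimed saturation can be illustrated numerically. The sketch below (not taken from the paper; the function name and the 1% sequential fraction are illustrative assumptions) evaluates the standard Amdahl formula S(N) = 1 / (α + (1 − α)/N), where α is the sequential fraction of the workload and N the number of processing elements:

```python
# Illustrative sketch of Amdahl's Law, not code from the paper.
# S(N) = 1 / (alpha + (1 - alpha) / N), where alpha is the
# sequential fraction and N the number of processing elements.

def amdahl_speedup(seq_fraction: float, n_processors: int) -> float:
    """Speedup of a workload with sequential fraction seq_fraction on N processors."""
    return 1.0 / (seq_fraction + (1.0 - seq_fraction) / n_processors)

if __name__ == "__main__":
    # Even a 1% sequential fraction (alpha = 0.01) caps the achievable
    # speedup at 1/alpha = 100x, no matter how many processors are added:
    for n in (10, 100, 1000, 10**6):
        print(f"N = {n:>7}: speedup = {amdahl_speedup(0.01, n):.1f}")
```

With α = 0.01 the speedup saturates near 100x as N grows, which mirrors the saturation behavior the abstract attributes to sequential-parallel simulation of inherently parallel networks.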