University of Central Florida, FL, USA.
Artif Life. 2012 Fall;18(4):331-63. doi: 10.1162/ARTL_a_00071. Epub 2012 Aug 31.
Intelligence in nature is the product of living brains, which are themselves the product of natural evolution. Although researchers in the field of neuroevolution (NE) attempt to recapitulate this process, artificial neural networks (ANNs) so far evolved through NE algorithms do not match the distinctive capabilities of biological brains. The recently introduced hypercube-based neuroevolution of augmenting topologies (HyperNEAT) approach narrowed this gap by demonstrating that the pattern of weights across the connectivity of an ANN can be generated as a function of its geometry, thereby allowing large ANNs to be evolved for high-dimensional problems. Yet the positions and number of the neurons connected through this approach must be decided a priori by the user and, unlike in living brains, cannot change during evolution. Evolvable-substrate HyperNEAT (ES-HyperNEAT), introduced in this article, addresses this limitation by automatically deducing the node geometry from implicit information in the pattern of weights encoded by HyperNEAT, thereby avoiding the need to evolve explicit placement. Not only can this approach evolve the location of every neuron in the network, it can also represent regions of varying density, which means resolution can increase holistically over evolution. ES-HyperNEAT is demonstrated through multi-task, maze navigation, and modular retina domains, revealing that the ANNs generated by this new approach assume natural properties such as neural topography and geometric regularity. Equally important, ES-HyperNEAT's compact indirect encoding can be seeded to begin with a bias toward a desired class of ANN topographies, which facilitates the evolutionary search. The main conclusion is that ES-HyperNEAT significantly expands the scope of neural structures that evolution can discover.
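
The following Python sketch illustrates, under simplifying assumptions, the kind of implicit-information mining the abstract describes: a CPPN-style function is queried across the substrate, and a region is recursively subdivided wherever the resulting weight pattern varies, so candidate hidden neurons accumulate where the encoding carries information. The example_cppn function, the QuadNode structure, and the max_depth and variance_threshold values are illustrative stand-ins, not the ES-HyperNEAT implementation described in the article.

import math
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

# Hypothetical stand-in for an evolved CPPN: in HyperNEAT the weight of the
# connection from a source neuron at (x1, y1) to a target at (x2, y2) is the
# output of the CPPN queried with those coordinates.
def example_cppn(x1: float, y1: float, x2: float, y2: float) -> float:
    return math.sin(3.0 * x2 + x1) * math.cos(3.0 * y2 + y1)

@dataclass
class QuadNode:
    x: float                     # centre of this square substrate region
    y: float
    width: float                 # half-width of the region
    weight: float = 0.0
    children: Optional[List["QuadNode"]] = None

def divide(cppn: Callable[[float, float, float, float], float],
           src_x: float, src_y: float, node: QuadNode, depth: int,
           max_depth: int = 4, variance_threshold: float = 0.03) -> None:
    # Query the CPPN at the centre of each of the four child regions for the
    # connection originating at (src_x, src_y); keep subdividing only where
    # the resulting weights vary, i.e. where the pattern carries information.
    half = node.width / 2.0
    node.children = [QuadNode(node.x + dx * half, node.y + dy * half, half)
                     for dx in (-1, 1) for dy in (-1, 1)]
    for child in node.children:
        child.weight = cppn(src_x, src_y, child.x, child.y)
    mean = sum(c.weight for c in node.children) / 4.0
    variance = sum((c.weight - mean) ** 2 for c in node.children) / 4.0
    if depth < max_depth and variance > variance_threshold:
        for child in node.children:
            divide(cppn, src_x, src_y, child, depth + 1,
                   max_depth, variance_threshold)

def collect_points(node: QuadNode, out: List[Tuple[float, float]]) -> None:
    # Leaves of the quadtree are the candidate hidden-neuron positions; finer
    # leaves appear automatically in regions of higher weight variance.
    if node.children is None:
        out.append((node.x, node.y))
    else:
        for child in node.children:
            collect_points(child, out)

# Usage: explore the [-1, 1] x [-1, 1] substrate as seen from a single
# input neuron placed at (0.0, -1.0).
root = QuadNode(0.0, 0.0, 1.0)
divide(example_cppn, 0.0, -1.0, root, depth=1)
points: List[Tuple[float, float]] = []
collect_points(root, points)
print(f"{len(points)} candidate hidden-neuron positions discovered")

In the article's full algorithm this discovery process is driven by the evolved CPPN rather than a fixed function, is repeated from the input and output neurons, and is combined with connection pruning; the sketch is meant only to show how neuron positions and density can fall out of the weight pattern instead of being fixed a priori.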