Stenning Kilian D, Gartside Jack C, Manneschi Luca, Cheung Christopher T S, Chen Tony, Vanstone Alex, Love Jake, Holder Holly, Caravelli Francesco, Kurebayashi Hidekazu, Everschor-Sitte Karin, Vasilaki Eleni, Branford Will R
Blackett Laboratory, Imperial College London, London, SW7 2AZ, United Kingdom.
London Centre for Nanotechnology, Imperial College London, London, SW7 2AZ, United Kingdom.
Nat Commun. 2024 Aug 27;15(1):7377. doi: 10.1038/s41467-024-50633-1.
Physical neuromorphic computing, exploiting the complex dynamics of physical systems, has seen rapid advancements in sophistication and performance. Physical reservoir computing, a subset of neuromorphic computing, faces limitations due to its reliance on single systems. This constrains output dimensionality and dynamic range, limiting performance to a narrow range of tasks. Here, we engineer a suite of nanomagnetic array physical reservoirs and interconnect them in parallel and series to create a multilayer neural network architecture. The output of one reservoir is recorded, scaled and virtually fed as input to the next reservoir. This networked approach increases output dimensionality, internal dynamics and computational performance. We demonstrate that a physical neuromorphic system can achieve an overparameterised state, facilitating meta-learning on small training sets and yielding strong performance across a wide range of tasks. Our approach's efficacy is further demonstrated through few-shot learning, where the system rapidly adapts to new tasks.
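The networked scheme the abstract describes (record one reservoir's output, rescale it, feed it as input to the next, and pool states for a higher-dimensional readout) can be illustrated with a minimal software analogue. This is a hedged sketch only: the paper uses physical nanomagnetic arrays, whereas the toy below substitutes simple simulated tanh echo-state reservoirs, and all names (`make_reservoir`, `run_reservoir`), sizes, the rescaling rule, and the toy delayed-input target are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_in, n_res, spectral_radius=0.9):
    """Random input and recurrent weights; the recurrent matrix is
    rescaled so its spectral radius stays below 1 (fading memory)."""
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    return W_in, W

def run_reservoir(W_in, W, u):
    """Drive the reservoir with input sequence u of shape (T, n_in);
    return the state trajectory of shape (T, n_res)."""
    x = np.zeros(W.shape[0])
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

T = 300
u = rng.uniform(-1, 1, (T, 1))  # scalar input stream

# Series connection: reservoir A's recorded output is rescaled and
# virtually fed as the input to reservoir B.
W_in_A, W_A = make_reservoir(1, 50)
states_A = run_reservoir(W_in_A, W_A, u)

scale = 1.0 / np.abs(states_A).max()  # scale recorded output before re-injection
W_in_B, W_B = make_reservoir(50, 50)
states_B = run_reservoir(W_in_B, W_B, scale * states_A)

# Parallel pooling: concatenating both state sets raises the output
# dimensionality available to the linear readout.
features = np.hstack([states_A, states_B])  # shape (T, 100)

# Linear readout trained by ridge regression on a toy memory task
# (recall the one-step-delayed input).
y = np.roll(u[:, 0], 1)
y[0] = 0.0
ridge = 1e-6
W_out = np.linalg.solve(
    features.T @ features + ridge * np.eye(features.shape[1]),
    features.T @ y,
)
pred = features @ W_out
```

In this analogue, the "network of networks" effect is visible in the readout: it regresses over the concatenated states of both reservoirs, so the series stage contributes transformed, deeper dynamics while the parallel concatenation widens the feature space, mirroring the dimensionality and dynamics gains the abstract reports.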