Florini Davide, Gandolfi Daniela, Mapelli Jonathan, Benatti Lorenzo, Pavan Paolo, Puglisi Francesco Maria
IEEE Trans Neural Netw Learn Syst. 2024 Apr;35(4):5117-5129. doi: 10.1109/TNNLS.2022.3202501. Epub 2024 Apr 4.
Artificial intelligence (AI) is changing the way computing is performed to cope with real-world, ill-defined tasks for which traditional algorithms fail. AI requires significant memory access, thus running into the von Neumann bottleneck when implemented on standard computing platforms. In this respect, low-latency, energy-efficient in-memory computing can be achieved by exploiting emerging memristive devices, given their ability to emulate synaptic plasticity, which provides a path to the design of large-scale brain-inspired spiking neural networks (SNNs). Several plasticity rules have been described in the brain, and their coexistence in the same network greatly expands the computational capabilities of a given circuit. In this work, starting from the electrical characterization and modeling of the memristive device, we propose a neuro-synaptic architecture that co-integrates, on a single platform and with a single type of synaptic device, two distinct learning rules, namely, spike-timing-dependent plasticity (STDP) and the Bienenstock-Cooper-Munro (BCM) rule. By exploiting these learning rules, the architecture successfully addresses two different unsupervised learning tasks.
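For readers unfamiliar with the two plasticity rules named in the abstract, the sketch below illustrates their textbook forms: pair-based STDP, where the sign and magnitude of the weight change depend on the relative spike timing, and the BCM rule, where the postsynaptic activity relative to a sliding threshold decides between potentiation and depression. All parameter values and function names are hypothetical illustrations; the paper itself maps these rules onto memristor conductance updates rather than software weights.

```python
import numpy as np

# Hypothetical parameters for illustration only; not taken from the paper.
A_PLUS, A_MINUS = 0.01, 0.012      # STDP potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # STDP time constants (ms)
ETA_BCM = 1e-3                     # BCM learning rate
TAU_THETA = 100.0                  # time constant of the BCM sliding threshold (ms)


def stdp_dw(delta_t_ms: float) -> float:
    """Pair-based STDP: weight change as a function of t_post - t_pre.

    Pre-before-post (delta_t > 0) potentiates; post-before-pre depresses.
    """
    if delta_t_ms > 0:
        return A_PLUS * np.exp(-delta_t_ms / TAU_PLUS)
    return -A_MINUS * np.exp(delta_t_ms / TAU_MINUS)


def bcm_step(w: float, x_pre: float, y_post: float,
             theta: float, dt_ms: float = 1.0) -> tuple[float, float]:
    """One Euler step of the BCM rule with a sliding modification threshold.

    dw/dt     = eta * y * (y - theta) * x
    dtheta/dt = (y**2 - theta) / tau_theta
    """
    w_new = w + ETA_BCM * y_post * (y_post - theta) * x_pre * dt_ms
    theta_new = theta + (y_post**2 - theta) / TAU_THETA * dt_ms
    return w_new, theta_new


if __name__ == "__main__":
    print(stdp_dw(+10.0))   # potentiation: pre spike 10 ms before post
    print(stdp_dw(-10.0))   # depression: post spike 10 ms before pre
    w, theta = 0.5, 0.1
    for _ in range(100):    # sustained activity moves both w and the threshold
        w, theta = bcm_step(w, x_pre=1.0, y_post=0.8, theta=theta)
    print(w, theta)
```

Note that STDP is driven by spike timing while BCM is driven by activity rates, which is why hosting both in one network with a single synaptic device type, as the paper proposes, broadens the range of unsupervised tasks the circuit can learn.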