Kim Youngeun, Kahana Adar, Yin Ruokai, Li Yuhang, Stinis Panos, Karniadakis George Em, Panda Priyadarshini
Department of Electrical Engineering, Yale University, New Haven, CT, United States.
Division of Applied Mathematics, Brown University, Providence, RI, United States.
Front Neurosci. 2024 Feb 14;18:1346805. doi: 10.3389/fnins.2024.1346805. eCollection 2024.
Time-To-First-Spike (TTFS) coding in Spiking Neural Networks (SNNs) offers significant advantages in terms of energy efficiency and closely mimics the behavior of biological neurons. In this work, we delve into the role of skip connections, a concept widely used in Artificial Neural Networks (ANNs), within the domain of SNNs with TTFS coding. Our focus is on two distinct types of skip connection architectures: (1) addition-based skip connections, and (2) concatenation-based skip connections. We find that addition-based skip connections introduce an additional delay in terms of spike timing. Concatenation-based skip connections, on the other hand, circumvent this delay but produce time gaps between the convolutional and skip-connection paths, thereby restricting the effective mixing of information from the two paths. To mitigate these issues, we propose a novel approach that introduces a learnable delay on the skip branch of the concatenation-based architecture. This approach successfully bridges the time gap between the convolutional and skip branches, facilitating improved information mixing. We conduct experiments on public datasets including MNIST and Fashion-MNIST, illustrating the advantage of skip connections in TTFS coding architectures. Additionally, we demonstrate the applicability of TTFS coding beyond image recognition by extending it to scientific machine-learning tasks, broadening the potential uses of SNNs.
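The core idea of the proposed approach can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the class name, the scalar-delay parameterization, and the gradient-descent update on the squared time gap are our own illustrative assumptions. It only shows the mechanism the abstract describes: the skip branch's (earlier) spike times are shifted by a learnable delay before concatenation, so the two paths arrive on a comparable timescale.

```python
import numpy as np

class LearnableDelaySkip:
    """Toy sketch (not the paper's code) of a concatenation-based skip
    connection with a learnable delay. Activations are represented as
    arrays of first-spike times; the skip branch, which bypasses the
    convolution, tends to spike earlier, creating a time gap."""

    def __init__(self, init_delay=0.0):
        # Learnable scalar delay applied to the skip branch (assumption:
        # a single shared delay, trained by gradient descent).
        self.delay = init_delay

    def forward(self, conv_times, skip_times):
        # Shift the skip branch's spike times, then concatenate the two
        # paths along the feature axis.
        return np.concatenate([conv_times, skip_times + self.delay], axis=-1)

    def time_gap(self, conv_times, skip_times):
        # The gap the delay should close: mean difference in spike times
        # between the convolutional and (delayed) skip branches.
        return float(np.mean(conv_times) - np.mean(skip_times + self.delay))

# Toy training loop: minimize the squared time gap w.r.t. the delay.
conv = np.array([3.0, 3.5])   # later spikes (after convolution)
skip = np.array([1.0, 1.5])   # earlier spikes (skip path)
layer = LearnableDelaySkip(0.0)
for _ in range(100):
    gap = layer.time_gap(conv, skip)
    layer.delay += 0.1 * gap  # gradient step on gap**2 (up to a constant)
```

After training, the delayed skip spikes align with the convolutional branch, so concatenation mixes temporally comparable information instead of two misaligned spike-time distributions.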