Zhejiang University, Hangzhou, 310027, China.
National University of Singapore, 119077, Singapore.
Neural Netw. 2024 May;173:106172. doi: 10.1016/j.neunet.2024.106172. Epub 2024 Feb 16.
Spiking neural networks (SNNs) are brain-inspired models that transmit information through discrete and sparse spikes, and are therefore inherently energy-efficient. Recent advances in learning algorithms, which automate feature engineering, have greatly improved SNN performance. While the choice of neural architecture plays a significant role in deep learning, current SNN architectures are mainly designed by hand, a time-consuming and error-prone process. In this paper, we propose a spiking neural architecture search (NAS) method that automatically finds efficient SNNs. To address the long search times SNNs face under NAS, the proposed method encodes candidate architectures in a branchless spiking supernet, which significantly reduces the computational requirements of the search process. Because real-world tasks favor networks that achieve the best accuracy under a limited computational budget, we propose a Synaptic Operation (SynOps)-aware optimization that automatically identifies the computationally efficient subspace of the supernet. Experimental results show that, with less search time, our NAS finds SNNs with higher accuracy and lower computational cost than state-of-the-art SNNs. We also conduct experiments validating the search process and the trade-off between accuracy and computational cost.
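To make the SynOps-budgeted search idea concrete, the following is a minimal, self-contained Python sketch, not the authors' implementation: candidate sampling, the accuracy proxy, and the SynOps estimate are all synthetic placeholders introduced here for illustration, and the paper's actual supernet training and optimization are not reproduced.

```python
# Hypothetical sketch of SynOps-aware architecture search: sample candidate
# subnets, discard those exceeding a SynOps budget, keep the most accurate.
# All quantities below are synthetic stand-ins, not values from the paper.
import random
from dataclasses import dataclass

@dataclass
class Candidate:
    depths: list      # hypothetical per-stage depth choices of a subnet
    accuracy: float   # stand-in for validation accuracy
    synops: float     # stand-in for measured synaptic operations

def sample_candidate(rng: random.Random) -> Candidate:
    """Draw one subnet from a toy search space of per-stage depths."""
    depths = [rng.choice([1, 2, 3]) for _ in range(4)]
    cost = sum(depths)
    # Synthetic proxy: deeper subnets are (noisily) more accurate but
    # fire more spikes, so their SynOps count grows with depth.
    return Candidate(
        depths=depths,
        accuracy=0.80 + 0.01 * cost + rng.gauss(0.0, 0.005),
        synops=1e6 * cost,
    )

def synops_aware_search(budget: float, n_samples: int = 200, seed: int = 0):
    """Return the most accurate sampled candidate within the SynOps budget."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_samples):
        c = sample_candidate(rng)
        if c.synops > budget:
            continue  # prune candidates outside the efficient subspace
        if best is None or c.accuracy > best.accuracy:
            best = c
    return best

if __name__ == "__main__":
    best = synops_aware_search(budget=8e6)
    print(best)
```

In this toy version the budget check simply filters random samples; the paper's contribution is to fold the SynOps constraint into the supernet search itself, so the trade-off between accuracy and computational cost is explored directly rather than by rejection.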