Rashvand Parvaneh, Ahmadzadeh Mohammad Reza, Shayegh Farzaneh
Digital Signal Processing Research Lab, Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan 84156-83111, Iran.
Int J Neural Syst. 2021 Mar;31(3):2050073. doi: 10.1142/S0129065720500732. Epub 2020 Dec 22.
In contrast to previous artificial neural networks (ANNs), spiking neural networks (SNNs) operate on temporal coding approaches. In the proposed SNN, the number of neurons, the neuron model, the encoding method, and the design of the learning algorithm are described clearly and precisely. It is also shown that optimizing the SNN parameters based on physiology, and maximizing the information they convey, leads to a more robust network. In this paper, inspired by the "center-surround" structure of the receptive fields in the retina and the amount of overlap between them, a robust SNN is implemented. It is based on the Integrate-and-Fire (IF) neuron model and uses time-to-first-spike coding to train the network with a newly proposed method. The Iris and MNIST datasets were employed to evaluate the performance of the proposed network, whose accuracy, with 60 input neurons, was 96.33% on the Iris dataset. The network was trained in only 45 iterations, indicating a reasonable convergence rate. For the MNIST dataset, when the gray level of each pixel was used as input to the network, 600 input neurons were required and the accuracy was 90.5%. When 14 structural features were used as input instead, the number of input neurons decreased to 210 and the accuracy increased to 95%, showing that an SNN with fewer input neurons and good performance was achieved. The ABIDE1 dataset was also applied to the proposed SNN. Of the 184 samples, 79 belong to healthy subjects and 105 to subjects with autism. One characteristic that can differentiate these two classes is the entropy of the data; therefore, Shannon entropy was used for feature extraction. Applying these values to the proposed SNN, an accuracy of 84.42% was achieved in only 120 iterations, which compares favorably with recent results.
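As a concrete illustration of the encoding and neuron model named in the abstract, the sketch below shows time-to-first-spike coding over overlapping center-surround (difference-of-Gaussians) receptive fields feeding an Integrate-and-Fire neuron. This is a minimal reading of the abstract, not the authors' implementation: the receptive-field layout, all parameter values, and the function names (center_surround_responses, time_to_first_spike, if_neuron_first_spike) are illustrative assumptions.

# Minimal sketch (not the authors' code): time-to-first-spike encoding with
# center-surround receptive fields and an Integrate-and-Fire (IF) neuron.
# Receptive-field layout, parameters, and names are illustrative assumptions.
import numpy as np

def center_surround_responses(x, n_fields=60, sigma_ratio=3.0):
    """Project a 1-D input vector onto overlapping 'center-surround'
    (difference-of-Gaussians) receptive fields; one response per field,
    with field centers spread uniformly over the input range."""
    x = np.asarray(x, dtype=float)
    centers = np.linspace(x.min(), x.max(), n_fields)
    width = (x.max() - x.min()) / n_fields + 1e-9
    responses = []
    for c in centers:
        center = np.exp(-((x - c) ** 2) / (2 * width ** 2))
        surround = np.exp(-((x - c) ** 2) / (2 * (sigma_ratio * width) ** 2))
        responses.append(np.sum(center - surround / sigma_ratio))
    return np.array(responses)

def time_to_first_spike(responses, t_max=100.0):
    """Stronger responses fire earlier: map each response linearly to a
    spike time in [0, t_max]; non-positive responses never fire."""
    r = np.clip(responses, 0.0, None)
    if r.max() == 0:
        return np.full_like(r, np.inf)
    times = t_max * (1.0 - r / r.max())
    times[r == 0] = np.inf          # no spike
    return times

def if_neuron_first_spike(spike_times, weights, threshold=1.0, t_max=100.0, dt=1.0):
    """IF output neuron: accumulate weighted input spikes over time and
    return the time of its first output spike (or inf if it never fires)."""
    v = 0.0
    for t in np.arange(0.0, t_max + dt, dt):
        v += np.sum(weights[(spike_times > t - dt) & (spike_times <= t)])
        if v >= threshold:
            return t
    return np.inf

# Toy usage: encode one Iris-like 4-feature sample and drive one IF neuron
# with random weights (stand-ins for weights learned by the proposed method).
sample = np.array([5.1, 3.5, 1.4, 0.2])
spikes = time_to_first_spike(center_surround_responses(sample, n_fields=60))
rng = np.random.default_rng(0)
print(if_neuron_first_spike(spikes, rng.uniform(0.0, 0.2, size=spikes.size)))

In such a scheme the earliest output spike can be taken as the predicted class, which is one common way time-to-first-spike readouts are used; the abstract does not specify the readout, so this is only a plausible choice.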
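For the ABIDE1 experiment, the abstract names Shannon entropy as the extracted feature. The snippet below is a minimal sketch of how such a scalar feature could be computed from a subject's signal via a histogram estimate; the binning strategy and bin count are assumptions, not taken from the paper.

# Minimal sketch (an assumption, not the authors' pipeline): Shannon entropy
# of a 1-D signal, estimated from a histogram of its values.
import numpy as np

def shannon_entropy(signal, n_bins=32):
    """Estimate Shannon entropy (in bits): H = -sum(p * log2(p)) over
    non-empty histogram bins of the signal's values."""
    counts, _ = np.histogram(np.asarray(signal, dtype=float), bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]                      # ignore empty bins (0 * log 0 := 0)
    return float(-np.sum(p * np.log2(p)))

# Toy usage: a noisy signal yields higher entropy than a near-constant one.
rng = np.random.default_rng(0)
print(shannon_entropy(rng.normal(size=1000)))                      # broad distribution
print(shannon_entropy(np.ones(1000) + 1e-3 * rng.normal(size=1000)))  # nearly constant

The resulting per-subject entropy values would then be encoded into spike times and fed to the SNN, mirroring the pipeline the abstract describes for the 84.42% ABIDE1 result.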