IEEE Trans Neural Netw Learn Syst. 2015 Sep;26(9):1963-78. doi: 10.1109/TNNLS.2014.2362542. Epub 2014 Oct 22.
This paper introduces an event-driven feedforward categorization system, which takes data from a temporal contrast address event representation (AER) sensor. The proposed system extracts bio-inspired cortex-like features and discriminates different patterns using an AER-based tempotron classifier (a network of leaky integrate-and-fire spiking neurons). One of the system's most appealing characteristics is its event-driven processing, with both input and features taking the form of address events (spikes). The system was evaluated on an AER posture dataset and compared with two recently developed bio-inspired models. Experimental results show that it requires much less simulation time while maintaining comparable performance. In addition, experiments on the Mixed National Institute of Standards and Technology (MNIST) image dataset demonstrate that the proposed system can work not only on raw AER data but also on images (with a preprocessing step to convert images into AER events) and that it maintains competitive accuracy even when noise is added. The system was further evaluated on the MNIST dynamic vision sensor dataset (in which data are recorded using an AER dynamic vision sensor), achieving a testing accuracy of 88.14%.
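The classification stage described above is a tempotron: a leaky integrate-and-fire neuron whose membrane potential is a weighted sum of postsynaptic-potential kernels triggered by incoming address events, and which fires (classifies positive) if that potential crosses a threshold. The following Python sketch illustrates such a decision rule over an address-event stream, together with a toy intensity-to-latency conversion standing in for the image-to-AER preprocessing step. The kernel shape, time constants, event format, and all names here are illustrative assumptions, not the paper's implementation.

    # Minimal sketch of a tempotron-style decision rule over address events.
    # Kernel, time constants, and the (address, timestamp) event format are
    # assumptions for illustration only.
    import numpy as np

    TAU, TAU_S, T_WINDOW = 20.0, 5.0, 100.0   # membrane/synaptic time constants and window (ms)

    def kernel(dt):
        """Postsynaptic potential kernel K(dt) for dt >= 0, zero otherwise."""
        dt = np.asarray(dt, dtype=float)
        k = np.exp(-dt / TAU) - np.exp(-dt / TAU_S)
        return np.where(dt >= 0.0, k, 0.0)

    def membrane_trace(events, weights, t_grid):
        """Sum weighted PSP kernels of all input events on a time grid.
        `events` is a list of (address, timestamp_ms) pairs, e.g. from an
        AER sensor or from an image-to-spike conversion step."""
        v = np.zeros_like(t_grid)
        for addr, t_spike in events:
            v += weights[addr] * kernel(t_grid - t_spike)
        return v

    def classify(events, weights, threshold=1.0, dt=1.0):
        """Output 1 if the membrane potential ever crosses the threshold."""
        t_grid = np.arange(0.0, T_WINDOW, dt)
        return float(membrane_trace(events, weights, t_grid).max() >= threshold)

    def image_to_events(image, t_max=T_WINDOW):
        """Toy intensity-to-latency conversion: brighter pixels spike earlier.
        A stand-in (assumed scheme) for the image-to-AER preprocessing step."""
        img = np.asarray(image, dtype=float).ravel()
        img = img / (img.max() + 1e-9)
        return [(addr, float((1.0 - p) * t_max)) for addr, p in enumerate(img) if p > 0]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n_afferents = 64                          # e.g. one per extracted feature address
        weights = rng.normal(0.0, 0.1, n_afferents)
        # toy event stream: (address, time in ms)
        events = [(int(rng.integers(n_afferents)), float(rng.uniform(0, T_WINDOW)))
                  for _ in range(40)]
        print("decision:", classify(events, weights))

In the actual tempotron learning rule, misclassified patterns drive weight updates proportional to the kernel value evaluated at the time of maximum membrane potential; the sketch above covers only the inference-time decision.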