Department of Electrical Engineering, Yale University, New Haven, CT, USA.
Sci Rep. 2021 Sep 24;11(1):19037. doi: 10.1038/s41598-021-98448-0.
By emulating biological features of the brain, Spiking Neural Networks (SNNs) offer an energy-efficient alternative to conventional deep learning. To make SNNs ubiquitous, a 'visual explanation' technique for analysing and explaining the internal spike behaviour of such temporal deep SNNs is crucial. Explaining SNNs visually makes the network more transparent, giving the end-user a tool to understand how SNNs make temporal predictions and why they make a certain decision. In this paper, we propose a bio-plausible visual explanation tool for SNNs, called Spike Activation Map (SAM). SAM yields a heatmap (i.e., localization map) for each time-step of the input data by highlighting neurons with short inter-spike-interval activity. Interestingly, without using gradients or ground-truth labels, SAM produces a temporal localization map highlighting the region of interest in an image attributed to an SNN's prediction at each time-step. Overall, SAM marks the beginning of a new research area, 'explainable neuromorphic computing', that will ultimately allow end-users to establish appropriate trust in predictions from SNNs.
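The core idea in the abstract (scoring neurons by how recently and how densely they have spiked, with no gradients or labels) can be sketched as follows. This is a minimal illustration, not the authors' exact formulation: it assumes each neuron's contribution at time-step `t` is a sum over its past spike times `t'` of an exponentially decaying kernel `exp(-gamma * (t - t'))`, so neurons firing with short inter-spike intervals accumulate high scores; the function name `spike_activation_map` and the decay parameter `gamma` are illustrative choices.

```python
import numpy as np

def spike_activation_map(spikes, gamma=0.5):
    """Per-time-step heatmap from a binary spike train.

    spikes: array of shape (T, H, W), 1 where a neuron fired at that step.
    Returns an array of shape (T, H, W): at each step t, every neuron's
    past spikes are weighted by exp(-gamma * (t - t')), so recent, dense
    firing (short inter-spike intervals) yields the highest scores.
    """
    T = spikes.shape[0]
    sam = np.zeros(spikes.shape, dtype=float)
    for t in range(T):
        # decay[t'] weights a spike at step t' by how recently it occurred
        decay = np.exp(-gamma * (t - np.arange(t + 1)))
        # contract over the time axis: sum of decayed past spikes per neuron
        sam[t] = np.tensordot(decay, spikes[: t + 1], axes=1)
    return sam

# Toy example: one neuron observed over three time-steps
spikes = np.array([[[1]], [[0]], [[1]]], dtype=float)
heatmaps = spike_activation_map(spikes, gamma=0.5)
```

In the toy example the neuron's score at the final step combines the fresh spike (weight 1) with the decayed first spike (weight `exp(-1)`), illustrating how the map evolves per time-step rather than producing a single static explanation.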