Ieng Sio-Hoi, Lehtonen Eero, Benosman Ryad
INSERM UMRI S 968, Sorbonne Universites, UPMC Univ Paris 06, UMR S 968, Centre National de la Recherche Scientifique, UMR 7210, Institut de la Vision, Paris, France.
Department of Future Technologies, University of Turku, Turku, Finland.
Front Neurosci. 2018 Jun 12;12:373. doi: 10.3389/fnins.2018.00373. eCollection 2018.
This paper introduces an event-based methodology for performing arbitrary linear basis transformations, encompassing a broad range of practically important signal transforms such as the discrete Fourier transform (DFT) and the discrete wavelet transform (DWT). We present a complexity analysis of the proposed method and show that, for natural video sequences, the number of required multiply-and-accumulate (MAC) operations is reduced compared to the frame-based method when the required temporal resolution is high enough. Experimental results on natural video sequences acquired by the asynchronous time-based neuromorphic image sensor (ATIS) are provided to support the feasibility of the method and to illustrate the savings in computational resources.
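The core idea can be illustrated with a minimal sketch: because a basis transform is linear in the input, a single pixel-change event can update the transform coefficients by adding the scaled corresponding basis column, rather than recomputing the transform over a full frame. The function names and the toy event stream below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def make_dft_basis(n):
    """n x n 1-D DFT basis matrix (rows are basis vectors)."""
    k = np.arange(n)
    return np.exp(-2j * np.pi * np.outer(k, k) / n)

def event_update(coeffs, basis, pixel, delta):
    """Update transform coefficients for a single pixel-change event.
    For C = B @ f, a change of `delta` at pixel k gives C += delta * B[:, k],
    costing one column of multiply-accumulates instead of a full-frame
    recomputation."""
    coeffs += delta * basis[:, pixel]
    return coeffs

n = 8
basis = make_dft_basis(n)
frame = np.zeros(n)
coeffs = basis @ frame  # initial transform of an all-zero frame

# Simulated sparse event stream: (pixel index, intensity change) -- hypothetical
events = [(2, 1.0), (5, -0.5), (2, 0.25)]
for pixel, delta in events:
    frame[pixel] += delta
    coeffs = event_update(coeffs, basis, pixel, delta)

# The incrementally maintained coefficients match a full recomputation.
assert np.allclose(coeffs, basis @ frame)
```

When events are sparse relative to the frame rate and resolution, this per-event update performs far fewer MAC operations than repeatedly transforming whole frames, which is the trade-off the complexity analysis quantifies.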