The Neuro-Biomorphic Engineering Lab, Department of Mathematics and Computer Science, The Open University of Israel, Ra'anana, Israel.
Bioinspir Biomim. 2024 Mar 4;19(3). doi: 10.1088/1748-3190/ad2a7c.
Neuromorphic event-based cameras communicate transients in luminance instead of frames, providing visual information with fine temporal resolution, high dynamic range and high signal-to-noise ratio. Enriching event data with color information allows for the reconstruction of colorful frame-like intensity maps, supporting improved performance and visually appealing results in various computer vision tasks. In this work, we simulated a biologically inspired color fusion system featuring a three-stage convolutional neural network for reconstructing color intensity maps from event data and sparse color cues. While current approaches to color fusion use full-resolution RGB frames, our design uses event data together with quantized color cues of low spatial and tonal resolution, yielding a compact, high-performing model for efficient color image reconstruction. The proposed model outperforms existing coloring schemes in terms of SSIM, LPIPS, PSNR, and CIEDE2000 metrics. We demonstrate that auxiliary limited color information can be used in conjunction with event data to successfully reconstruct both color and intensity frames, paving the way for more efficient hardware designs.
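The abstract does not detail the network, so the following is only a minimal sketch of the general idea it describes: three sequential convolutional stages that fuse an event representation (assumed here to be a voxel grid with a few temporal bins) with low-resolution, quantized color cues and output a full-resolution RGB intensity map. All layer sizes, the voxel-grid input format, and the class name `ThreeStageColorFusion` are illustrative assumptions, not the authors' architecture.

```python
# Hypothetical sketch of event/color-cue fusion, NOT the published model.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ThreeStageColorFusion(nn.Module):
    def __init__(self, event_bins: int = 5, cue_channels: int = 3, base: int = 32):
        super().__init__()
        # Stage 1: encode the event voxel grid into a feature map.
        self.event_stage = nn.Sequential(
            nn.Conv2d(event_bins, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Stage 2: fuse event features with upsampled, quantized color cues.
        self.fusion_stage = nn.Sequential(
            nn.Conv2d(base + cue_channels, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Stage 3: decode the fused features into a full-resolution RGB frame.
        self.decode_stage = nn.Sequential(
            nn.Conv2d(base, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, event_voxels: torch.Tensor, color_cues: torch.Tensor) -> torch.Tensor:
        # event_voxels: (B, event_bins, H, W); color_cues: (B, 3, h, w) with h << H, w << W.
        feats = self.event_stage(event_voxels)
        cues_up = F.interpolate(color_cues, size=feats.shape[-2:],
                                mode="bilinear", align_corners=False)
        fused = self.fusion_stage(torch.cat([feats, cues_up], dim=1))
        return self.decode_stage(fused)


if __name__ == "__main__":
    model = ThreeStageColorFusion()
    events = torch.randn(1, 5, 180, 240)   # event voxel grid at sensor resolution
    cues = torch.rand(1, 3, 23, 30)         # sparse, low-resolution color cues
    rgb = model(events, cues)
    print(rgb.shape)                        # torch.Size([1, 3, 180, 240])
```

In this sketch the color cues are bilinearly upsampled to the event feature resolution before fusion; the actual paper may handle the resolution and tonal quantization mismatch differently.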