Zhang Pei, Liu Haosen, Ge Zhou, Wang Chutian, Lam Edmund Y
IEEE Trans Image Process. 2024;33:2318-2333. doi: 10.1109/TIP.2024.3374074. Epub 2024 Mar 21.
Neuromorphic imaging reacts to per-pixel brightness changes of a dynamic scene with high temporal precision, encoding them as asynchronous streaming events. It also often supports simultaneous output of an intensity image. Nevertheless, the raw events typically contain substantial noise due to the high sensitivity of the sensor, while capturing fast-moving objects at low frame rates results in blurry images. These deficiencies significantly degrade both human observation and machine processing. Fortunately, the two information sources are inherently complementary: events with microsecond-level temporal resolution, triggered by the edges of objects recorded in a latent sharp image, can supply rich motion details missing from the blurry one. In this work, we bring the two types of data together and introduce a simple yet effective unifying algorithm to jointly reconstruct blur-free images and noise-robust events in an iterative coarse-to-fine fashion. Specifically, an event-regularized prior offers precise high-frequency structures and dynamic features for blind deblurring, while image gradients serve as faithful supervision in regulating neuromorphic noise removal. Comprehensively evaluated on real and synthetic samples, this synergy delivers superior reconstruction quality for both images with severe motion blur and raw event streams corrupted by heavy noise, and also exhibits greater robustness to challenging realistic scenarios such as varying levels of illumination, contrast and motion magnitude. Meanwhile, it can be driven by far fewer events and holds a competitive edge in computational overhead, rendering it preferable when available computing resources are limited. Our solution improves both sensing modalities and paves the way for highly accurate neuromorphic reasoning and analysis.
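To make the image-event complementarity described above concrete, the following is a minimal sketch (not the paper's algorithm) of the standard idealized event-camera model: a pixel fires an event whenever its log intensity changes by more than a contrast threshold since its last event, while a blurry exposure is the temporal average of the latent sharp frames. All names (`generate_events`, the threshold `C`, the toy moving-square scene) are illustrative assumptions.

```python
import numpy as np

def generate_events(frames, C=0.2, eps=1e-6):
    """Idealized event generation: frames is a (T, H, W) stack of latent
    intensity frames; returns a list of (t, y, x, polarity) events fired
    when per-pixel log intensity drifts by at least the threshold C."""
    log_ref = np.log(frames[0] + eps)          # per-pixel reference level
    events = []
    for t in range(1, len(frames)):
        log_cur = np.log(frames[t] + eps)
        diff = log_cur - log_ref
        ys, xs = np.where(np.abs(diff) >= C)
        for y, x in zip(ys, xs):
            pol = 1 if diff[y, x] > 0 else -1  # brightening vs darkening
            events.append((t, y, x, pol))
            log_ref[y, x] = log_cur[y, x]      # reset reference after firing
    return events

# Toy scene: a bright square sliding right over a dark background.
T, H, W = 8, 16, 16
frames = np.full((T, H, W), 0.1)
for t in range(T):
    frames[t, 4:8, 2 + t:6 + t] = 1.0

# The blurry frame is the average of the latent frames over the exposure;
# motion smears the square, while events mark its travelling edges.
blurry = frames.mean(axis=0)
events = generate_events(frames)
```

Events of both polarities cluster along the leading and trailing edges of the moving square, which is exactly the high-frequency, motion-aligned structure the abstract says a deblurring prior can exploit; conversely, the gradients of the (latent sharp) image indicate where genuine events should occur, which is the supervision signal for denoising.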