Wang Zhaoqing, Wan Tianqing, Ma Sijie, Chai Yang
Department of Applied Physics, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China.
Joint Research Centre of Microelectronics, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China.
Nat Nanotechnol. 2024 Jul;19(7):919-930. doi: 10.1038/s41565-024-01665-7. Epub 2024 Jun 14.
The visual scene in the physical world integrates multidimensional information (spatial, temporal, polarization, spectral and so on) and is typically unstructured. Conventional image sensors cannot process such multidimensional vision data, creating a need for vision sensors that can efficiently extract features from large volumes of it. Vision sensors can transform the unstructured visual scene into featured information without relying on sophisticated algorithms or complex hardware. The response characteristics of the sensors themselves can be abstracted into operators with specific functionalities, allowing perceptual information to be processed efficiently. In this Review, we delve into the hardware implementation of multidimensional vision sensors, exploring their working mechanisms and design principles. We exemplify multidimensional vision sensors built on emerging devices and on silicon-based system integration. We further provide benchmarking metrics for multidimensional vision sensors and conclude with the principle of device-system co-design and co-optimization.
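The idea that a sensor's response characteristic can itself act as a computational operator can be illustrated with a minimal sketch. The model below is not taken from the Review: it assumes a simplified linear photoresponse in which each pixel's programmable responsivity `R[i, j]` serves as a kernel weight, so that the array's summed photocurrent directly yields one feature value (an in-sensor multiply-accumulate).

```python
import numpy as np

def in_sensor_operator(light_power, responsivity):
    """Summed photocurrent I = sum_ij R[i, j] * P[i, j].

    Assumes an idealized linear photoresponse; in a real device the
    negative weights would map to, e.g., a second photodiode branch.
    """
    return float(np.sum(responsivity * light_power))

# A 3x3 image patch (incident optical power per pixel, arbitrary units)
# containing a vertical edge on its right side.
patch = np.array([[0.0, 0.0, 1.0],
                  [0.0, 0.0, 1.0],
                  [0.0, 0.0, 1.0]])

# Responsivities programmed to a Sobel-like vertical-edge kernel:
# the sensor's response characteristic *is* the feature extractor.
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])

current = in_sensor_operator(patch, kernel)
print(current)  # → 4.0: strong response, the patch contains a vertical edge
```

Because the kernel weights sum to zero, a uniformly illuminated patch produces zero net photocurrent, so the array responds only to the spatial feature it was programmed for, without any digital post-processing.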