
Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors for the Advanced Driver-Assistance System (ADAS)

Author Information

Kakani Vijay, Kim Hakil, Kumbham Mahendar, Park Donghun, Jin Cheng-Bin, Nguyen Van Huan

Affiliations

Information and Communication Engineering, Inha University, 100 Inharo, Nam-gu, Incheon 22212, Korea.

Valeo Vision Systems, Dunmore Road, Tuam, Co. Galway H54, Ireland.

Publication Information

Sensors (Basel). 2019 Jul 31;19(15):3369. doi: 10.3390/s19153369.

Abstract

This paper proposes a self-calibration method applicable to multiple larger field-of-view (FOV) camera models on an advanced driver-assistance system (ADAS). First, the method performs a series of pre-processing steps, such as edge detection, length thresholding, and edge grouping, to segregate robust line candidates from the pool of initial distorted line segments. A novel straightness cost constraint with a cross-entropy loss is imposed on the selected line candidates, and this loss is exploited to optimize the lens-distortion parameters using the Levenberg-Marquardt (LM) optimization approach. The best-fit distortion parameters are used to undistort each image frame, enabling various high-end vision-based tasks on the distortion-rectified frames. This study also investigates experimental approaches such as parameter sharing between multiple camera systems and a model-specific empirical γ-residual rectification factor. Quantitative comparisons were carried out between the proposed method, the traditional OpenCV method, and contemporary state-of-the-art self-calibration techniques on the KITTI dataset with synthetically generated distortion ranges. Standard image-consistency metrics, such as peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and position error in salient-point estimation, were employed for the performance evaluations. Finally, to better validate the proposed system on a real-time ADAS platform, a pragmatic qualitative analysis was conducted by streamlining high-end vision-based tasks, such as object detection, localization and mapping, and auto-parking, on the undistorted frames.
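The core idea described in the abstract — selecting line candidates and tuning lens-distortion parameters with Levenberg-Marquardt so that the candidates become straight after undistortion — can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's implementation: it uses a hypothetical single-parameter division distortion model and a plain point-to-line distance as the straightness residual (the paper's cross-entropy straightness loss and full camera models are not detailed in the abstract), with SciPy's LM solver standing in for the optimizer.

```python
import numpy as np
from scipy.optimize import least_squares

def undistort_points(pts, lam, center):
    """Single-parameter division model: x_u = c + (x_d - c) / (1 + lam * r^2)."""
    d = pts - center
    r2 = np.sum(d ** 2, axis=1, keepdims=True)
    return center + d / (1.0 + lam * r2)

def straightness_residuals(lam, segments, center):
    """For each candidate segment, the signed distances of its undistorted
    points from their own best-fit line (total least squares via SVD)."""
    res = []
    for pts in segments:
        u = undistort_points(pts, lam[0], center)
        u0 = u - u.mean(axis=0)
        # last right singular vector = normal of the best-fit line
        _, _, vt = np.linalg.svd(u0, full_matrices=False)
        res.extend(u0 @ vt[-1])
    return np.asarray(res)

def calibrate(segments, center, lam0=0.0):
    """Fit the distortion parameter so all candidate segments straighten."""
    sol = least_squares(straightness_residuals, x0=[lam0],
                        args=(segments, center), method="lm")
    return sol.x[0]
```

In this sketch each `segments[i]` is an (N, 2) array of pixel coordinates along one detected (distorted) line candidate; the real system would feed in the edge-grouped candidates from the pre-processing stage and optimize a richer parameter vector.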

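The image-consistency metrics used in the evaluation, PSNR and SSIM, can be computed directly with NumPy. This is a hedged sketch: `psnr` follows the standard definition, while `ssim_global` is a simplified single-window variant of SSIM (the full metric averages the same statistic over local Gaussian windows); production evaluations typically use a library implementation such as `skimage.metrics`.

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(ref, test, max_val=255.0):
    """Simplified whole-image SSIM (single window, no Gaussian weighting)."""
    x = ref.astype(np.float64)
    y = test.astype(np.float64)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For the comparisons in the paper, `ref` would be the ground-truth (undistorted) frame and `test` the distortion-rectified output of each calibration method.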

Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c024/6696342/dfbc99ccd171/sensors-19-03369-g0A1.jpg
