
Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors for the Advanced Driver-Assistance System (ADAS).

Authors

Kakani Vijay, Kim Hakil, Kumbham Mahendar, Park Donghun, Jin Cheng-Bin, Nguyen Van Huan

Affiliations

Information and Communication Engineering, Inha University, 100 Inharo, Nam-gu, Incheon 22212, Korea.

Valeo Vision Systems, Dunmore Road, Tuam, Co. Galway H54, Ireland.

Publication

Sensors (Basel). 2019 Jul 31;19(15):3369. doi: 10.3390/s19153369.

DOI: 10.3390/s19153369
PMID: 31370372
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC6696342/
Abstract

This paper proposes a self-calibration method that can be applied to multiple larger field-of-view (FOV) camera models on an advanced driver-assistance system (ADAS). First, the proposed method performs a series of pre-processing steps, such as edge detection, length thresholding, and edge grouping, to segregate robust line candidates from the pool of initial distorted line segments. A novel straightness cost constraint with a cross-entropy loss is imposed on the selected line candidates, and this loss is exploited to optimize the lens-distortion parameters using the Levenberg-Marquardt (LM) optimization approach. The best-fit distortion parameters are used to undistort each image frame, allowing various high-end vision-based tasks to run on the distortion-rectified frame. In this study, experimental approaches such as parameter sharing between multiple camera systems and a model-specific empirical γ-residual rectification factor were investigated. Quantitative comparisons were carried out between the proposed method, the traditional OpenCV method, and contemporary state-of-the-art self-calibration techniques on the KITTI dataset with synthetically generated distortion ranges. Standard image-consistency metrics, such as peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and position error in salient-point estimation, were employed for the performance evaluations. Finally, to better validate the proposed system on a real-time ADAS platform, a pragmatic qualitative analysis was conducted by streamlining high-end vision-based tasks such as object detection, localization and mapping, and auto-parking on undistorted frames.
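The core loop the abstract describes — pick candidate line segments, then optimize lens-distortion parameters so the candidates become straight after undistortion — can be sketched as follows. This is a minimal illustration under assumed simplifications (a single-parameter division model, a plain least-squares straightness cost, and synthetic line data), not the paper's actual implementation, which uses a cross-entropy-based straightness constraint over multiple camera models:

```python
import numpy as np
from scipy.optimize import least_squares

def undistort_points(pts, k, center):
    # Single-parameter division model (an assumed stand-in for the
    # paper's lens models): p_u = c + (p_d - c) / (1 + k * r_d^2)
    d = pts - center
    r2 = np.sum(d ** 2, axis=1, keepdims=True)
    return center + d / (1.0 + k * r2)

def distort_points(pts, k, center):
    # Exact forward distortion for the division model: solve the
    # quadratic k*ru*rd^2 - rd + ru = 0 for the distorted radius rd (k > 0).
    d = pts - center
    ru = np.maximum(np.linalg.norm(d, axis=1), 1e-12)
    rd = (1.0 - np.sqrt(1.0 - 4.0 * k * ru ** 2)) / (2.0 * k * ru)
    return center + d * (rd / ru)[:, None]

def straightness_residuals(params, lines, center):
    # Straightness cost: perpendicular distances of each undistorted
    # candidate's points from their own best-fit line (SVD of centered points).
    k = params[0]
    res = []
    for pts in lines:
        u = undistort_points(pts, k, center)
        c = u - u.mean(axis=0)
        normal = np.linalg.svd(c, full_matrices=False)[2][-1]
        res.append(c @ normal)
    return np.concatenate(res)

# Synthetic data: sample straight segments, then distort them with k_true.
rng = np.random.default_rng(0)
center = np.array([320.0, 240.0])
k_true = 1e-6
lines = []
for _ in range(6):
    p0, p1 = rng.uniform([0, 0], [640, 480], (2, 2))
    t = np.linspace(0.0, 1.0, 20)[:, None]
    lines.append(distort_points(p0 + t * (p1 - p0), k_true, center))

# Levenberg-Marquardt refinement of the distortion parameter from k = 0.
res = least_squares(straightness_residuals, x0=[0.0],
                    args=(lines, center), method="lm")
print(f"recovered k = {res.x[0]:.3e} (true {k_true:.1e})")
```

On noise-free synthetic lines the recovered parameter should match `k_true` closely; with real edge detections, the paper's pre-processing (length thresholding, edge grouping) is what keeps unreliable candidates out of this optimization.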

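PSNR, one of the image-consistency metrics named in the evaluation, reduces to a few lines; a minimal NumPy sketch (the `psnr` helper and the toy 4×4 frames are ours, for illustration only):

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    # Peak signal-to-noise ratio (dB) between a reference frame and a
    # rectified frame of identical shape.
    err = ref.astype(np.float64) - test.astype(np.float64)
    mse = np.mean(err ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.full((4, 4), 128, dtype=np.uint8)
test = ref.copy()
test[0, 0] += 10                 # one pixel off by 10 -> MSE = 100/16 = 6.25
print(round(psnr(ref, test), 2)) # 40.17
```

In practice both PSNR and the more involved SSIM are available ready-made, e.g. as `peak_signal_noise_ratio` and `structural_similarity` in `skimage.metrics`.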

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c024/6696342/18af3eb50ac2/sensors-19-03369-g020.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c024/6696342/dfbc99ccd171/sensors-19-03369-g0A1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c024/6696342/5e05ca64d443/sensors-19-03369-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c024/6696342/0427c7ebbaab/sensors-19-03369-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c024/6696342/b3aa23cbd3d6/sensors-19-03369-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c024/6696342/d716014150a8/sensors-19-03369-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c024/6696342/503f8326aab2/sensors-19-03369-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c024/6696342/522f59d91e3d/sensors-19-03369-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c024/6696342/650e4e718ef9/sensors-19-03369-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c024/6696342/8589dc794891/sensors-19-03369-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c024/6696342/f51460e0a06d/sensors-19-03369-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c024/6696342/fd8e77fb7275/sensors-19-03369-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c024/6696342/5ddc30dbbf00/sensors-19-03369-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c024/6696342/0e56508586f2/sensors-19-03369-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c024/6696342/caa9af0f6a57/sensors-19-03369-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c024/6696342/cd15fd008a68/sensors-19-03369-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c024/6696342/230543f3534c/sensors-19-03369-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c024/6696342/da807c08037f/sensors-19-03369-g016.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c024/6696342/ef392be599dd/sensors-19-03369-g017.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c024/6696342/88e0e62393aa/sensors-19-03369-g018.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c024/6696342/e74ec7e83f1d/sensors-19-03369-g019.jpg

Similar Articles

1
Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors for the Advanced Driver-Assistance System (ADAS).
Sensors (Basel). 2019 Jul 31;19(15):3369. doi: 10.3390/s19153369.
2
Automatic Distortion Rectification of Wide-Angle Images Using Outlier Refinement for Streamlining Vision Tasks.
Sensors (Basel). 2020 Feb 7;20(3):894. doi: 10.3390/s20030894.
3
Accurate Calibration of a Large Field of View Camera with Coplanar Constraint for Large-Scale Specular Three-Dimensional Profile Measurement.
Sensors (Basel). 2023 Mar 25;23(7):3464. doi: 10.3390/s23073464.
4
Moving Object Detection Based on Optical Flow Estimation and a Gaussian Mixture Model for Advanced Driver Assistance Systems.
Sensors (Basel). 2019 Jul 22;19(14):3217. doi: 10.3390/s19143217.
5
Effect of Enhanced ADAS Camera Capability on Traffic State Estimation.
Sensors (Basel). 2021 Mar 12;21(6):1996. doi: 10.3390/s21061996.
6
An Intra-Vehicular Wireless Multimedia Sensor Network for Smartphone-Based Low-Cost Advanced Driver-Assistance Systems.
Sensors (Basel). 2022 Apr 15;22(8):3026. doi: 10.3390/s22083026.
7
High-precision method of binocular camera calibration with a distortion model.
Appl Opt. 2017 Mar 10;56(8):2368-2377. doi: 10.1364/AO.56.002368.
8
Radial distortion correction in a vision system.
Appl Opt. 2016 Nov 1;55(31):8876-8883. doi: 10.1364/AO.55.008876.
9
Precise and robust binocular camera calibration based on multiple constraints.
Appl Opt. 2018 Jun 20;57(18):5130-5140. doi: 10.1364/AO.57.005130.
10
An efficient camera calibration technique offering robustness and accuracy over a wide range of lens distortion.
IEEE Trans Image Process. 2012 Feb;21(2):626-37. doi: 10.1109/TIP.2011.2164421. Epub 2011 Aug 12.

Cited By

1
Implementation of Field-Programmable Gate Array Platform for Object Classification Tasks Using Spike-Based Backpropagated Deep Convolutional Spiking Neural Networks.
Micromachines (Basel). 2023 Jun 30;14(7):1353. doi: 10.3390/mi14071353.
2
Multisensory Testing Framework for Advanced Driver Assistant Systems Supported by High-Quality 3D Simulation.
Sensors (Basel). 2021 Dec 18;21(24):8458. doi: 10.3390/s21248458.
3
Vision-Based Tactile Sensor Mechanism for the Estimation of Contact Position and Force Distribution Using Deep Learning.
Sensors (Basel). 2021 Mar 9;21(5):1920. doi: 10.3390/s21051920.

References

1
Real-Time Semantic Segmentation for Fisheye Urban Driving Images Based on ERFNet.
Sensors (Basel). 2019 Jan 25;19(3):503. doi: 10.3390/s19030503.
2
Vision-Based People Detection System for Heavy Machine Applications.
Sensors (Basel). 2016 Jan 20;16(1):128. doi: 10.3390/s16010128.
3
DoF-Dependent and Equal-Partition Based Lens Distortion Modeling and Calibration Method for Close-Range Photogrammetry.
Sensors (Basel). 2020 Oct 20;20(20):5934. doi: 10.3390/s20205934.
4
Surface Thermo-Dynamic Characterization of Poly(Vinylidene Chloride-Co-Acrylonitrile) (P(VDC-co-AN)) Using Inverse-Gas Chromatography and Investigation of Visual Traits Using Computer Vision Image Processing Algorithms.
Polymers (Basel). 2020 Jul 23;12(8):1631. doi: 10.3390/polym12081631.
5
Automatic Distortion Rectification of Wide-Angle Images Using Outlier Refinement for Streamlining Vision Tasks.
Sensors (Basel). 2020 Feb 7;20(3):894. doi: 10.3390/s20030894.