

Unifying Obstacle Detection, Recognition, and Fusion Based on the Polarization Color Stereo Camera and LiDAR for the ADAS.

Affiliations

Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou 311100, China.

Science and Technology on Space Intelligent Control Laboratory, Beijing Institute of Control Engineering, Beijing 100094, China.

Publication

Sensors (Basel). 2022 Mar 23;22(7):2453. doi: 10.3390/s22072453.

DOI: 10.3390/s22072453
PMID: 35408068
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9003213/
Abstract

The perception module plays an important role in vehicles equipped with advanced driver-assistance systems (ADAS). This paper presents a multi-sensor data fusion system based on the polarization color stereo camera and the forward-looking light detection and ranging (LiDAR) sensor, which achieves multi-target detection, recognition, and data fusion. The You Only Look Once v4 (YOLOv4) network is used for object detection and recognition on the color images. Depth images are obtained from the rectified left and right images based on the epipolar constraint, and obstacles are then detected from the depth images using the MeanShift algorithm. Pixel-level polarization images are extracted from the raw polarization-grey images, from which water hazards are detected. The PointPillars network is employed to detect objects in the point cloud. Calibration and synchronization between the sensors are accomplished. The experimental results show that data fusion enriches the detection results, provides high-dimensional perceptual information, and extends the effective detection range. Meanwhile, the detection results remain stable under diverse range and illumination conditions.
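The stereo-depth step described above (depth images from a rectified left/right pair under the epipolar constraint) reduces, per pixel, to triangulation from disparity: Z = f·B/d. The sketch below is a minimal illustration of that relation, not the paper's implementation; the focal length and baseline values are hypothetical.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Triangulate metric depth from the disparity map of a rectified
    stereo pair: Z = f * B / d. Non-positive disparities (no match, or
    a point at infinity) are mapped to inf."""
    d = np.asarray(disparity, dtype=np.float64)
    depth = np.full(d.shape, np.inf)
    valid = d > 0
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth

# Hypothetical calibration: 1000 px focal length, 10 cm baseline.
# A 50 px disparity then corresponds to a point 2 m away.
depth = disparity_to_depth(np.array([[50.0, 0.0]]), focal_px=1000.0, baseline_m=0.1)
```

The obstacle-segmentation step would then run MeanShift clustering on such a depth map, as the abstract describes.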

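The water-hazard step relies on the pixel-wise polarization sub-images mentioned in the abstract. A standard way (assumed here; the paper's exact formulation is not given in the abstract) to turn the four polarizer-angle sub-images of such a sensor (0°, 45°, 90°, 135°) into a polarization cue is via the linear Stokes parameters, since specular surfaces such as water typically show a high degree of linear polarization (DoLP):

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Degree and angle of linear polarization from the four
    polarizer-angle sub-images of a pixel-wise polarization sensor.
    S0 = I0 + I90 (total intensity), S1 = I0 - I90, S2 = I45 - I135."""
    s0 = np.asarray(i0, dtype=np.float64) + np.asarray(i90, dtype=np.float64)
    s1 = np.asarray(i0, dtype=np.float64) - np.asarray(i90, dtype=np.float64)
    s2 = np.asarray(i45, dtype=np.float64) - np.asarray(i135, dtype=np.float64)
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)  # in [0, 1]
    aolp = 0.5 * np.arctan2(s2, s1)                       # radians
    return dolp, aolp
```

A water-hazard mask could then be a threshold on `dolp`; the threshold itself would have to be tuned, as none is stated in the abstract.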

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f90/9003213/e6379ee62f61/sensors-22-02453-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f90/9003213/3456367a16fa/sensors-22-02453-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f90/9003213/789168c8cc64/sensors-22-02453-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f90/9003213/731fd7c3fcf6/sensors-22-02453-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f90/9003213/e988440b7ac8/sensors-22-02453-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f90/9003213/006fc1c93268/sensors-22-02453-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f90/9003213/04aa5aae2b52/sensors-22-02453-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f90/9003213/a5c6d929cffe/sensors-22-02453-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f90/9003213/a23fc46cae71/sensors-22-02453-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f90/9003213/9b45833aa44c/sensors-22-02453-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f90/9003213/7efbf00bec58/sensors-22-02453-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f90/9003213/08dbe2e6f499/sensors-22-02453-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f90/9003213/bc2bece16929/sensors-22-02453-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f90/9003213/b2437e882546/sensors-22-02453-g014.jpg

Similar articles

1. Unifying Obstacle Detection, Recognition, and Fusion Based on the Polarization Color Stereo Camera and LiDAR for the ADAS.
   Sensors (Basel). 2022 Mar 23;22(7):2453. doi: 10.3390/s22072453.
2. Unifying obstacle detection, recognition, and fusion based on millimeter wave radar and RGB-depth sensors for the visually impaired.
   Rev Sci Instrum. 2019 Apr;90(4):044102. doi: 10.1063/1.5093279.
3. ExistenceMap-PointPillars: A Multifusion Network for Robust 3D Object Detection with Object Existence Probability Map.
   Sensors (Basel). 2023 Oct 10;23(20):8367. doi: 10.3390/s23208367.
4. A LiDAR-Camera Joint Calibration Algorithm Based on Deep Learning.
   Sensors (Basel). 2024 Sep 18;24(18):6033. doi: 10.3390/s24186033.
5. Sensor and Sensor Fusion Technology in Autonomous Vehicles: A Review.
   Sensors (Basel). 2021 Mar 18;21(6):2140. doi: 10.3390/s21062140.
6. Real-time depth completion based on LiDAR-stereo for autonomous driving.
   Front Neurorobot. 2023 Apr 18;17:1124676. doi: 10.3389/fnbot.2023.1124676. eCollection 2023.
7. Multi-Task Foreground-Aware Network with Depth Completion for Enhanced RGB-D Fusion Object Detection Based on Transformer.
   Sensors (Basel). 2024 Apr 8;24(7):2374. doi: 10.3390/s24072374.
8. Real time object detection using LiDAR and camera fusion for autonomous driving.
   Sci Rep. 2023 May 17;13(1):8056. doi: 10.1038/s41598-023-35170-z.
9. Velocity Estimation from LiDAR Sensors Motion Distortion Effect.
   Sensors (Basel). 2023 Nov 26;23(23):9426. doi: 10.3390/s23239426.
10. End-to-End Multimodal Sensor Dataset Collection Framework for Autonomous Vehicles.
   Sensors (Basel). 2023 Jul 29;23(15):6783. doi: 10.3390/s23156783.

Cited by

1. Radomizing an Antenna for a SAR-Based ETA Radar System While Ensuring Imaging Accuracy: A Focus on Phase Shifts.
   Micromachines (Basel). 2025 Jun 17;16(6):720. doi: 10.3390/mi16060720.
2. Efficient three-dimensional point cloud object detection based on improved Complex-YOLO.
   Front Neurorobot. 2023 Feb 16;17:1092564. doi: 10.3389/fnbot.2023.1092564. eCollection 2023.

References

1. Polarization-driven semantic segmentation via efficient attention-bridged fusion.
   Opt Express. 2021 Feb 15;29(4):4802-4820. doi: 10.1364/OE.416130.
2. Salient Object Detection in the Deep Learning Era: An In-Depth Survey.
   IEEE Trans Pattern Anal Mach Intell. 2022 Jun;44(6):3239-3259. doi: 10.1109/TPAMI.2021.3051099. Epub 2022 May 5.
3. Snapshot multispectral imaging using a pixel-wise polarization color image sensor.
   Opt Express. 2020 Nov 9;28(23):34536-34573. doi: 10.1364/OE.402947.
4. Autonomous Dam Surveillance Robot System Based on Multi-Sensor Fusion.
   Sensors (Basel). 2020 Feb 17;20(4):1097. doi: 10.3390/s20041097.
5. Unifying obstacle detection, recognition, and fusion based on millimeter wave radar and RGB-depth sensors for the visually impaired.
   Rev Sci Instrum. 2019 Apr;90(4):044102. doi: 10.1063/1.5093279.
6. Object Detection With Deep Learning: A Review.
   IEEE Trans Neural Netw Learn Syst. 2019 Nov;30(11):3212-3232. doi: 10.1109/TNNLS.2018.2876865. Epub 2019 Jan 28.
7. Detecting Traversable Area and Water Hazards for the Visually Impaired with a pRGB-D Sensor.
   Sensors (Basel). 2017 Aug 17;17(8):1890. doi: 10.3390/s17081890.
8. Deep learning.
   Nature. 2015 May 28;521(7553):436-44. doi: 10.1038/nature14539.