

Mapping with Monocular Camera Sensor under Adversarial Illumination for Intelligent Vehicles.

Affiliation

School of Automotive Studies, Tongji University, Shanghai 201804, China.

Publication

Sensors (Basel). 2023 Mar 21;23(6):3296. doi: 10.3390/s23063296.

DOI: 10.3390/s23063296
PMID: 36992006
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10058667/
Abstract

High-precision maps are widely applied in intelligent-driving vehicles for localization and planning tasks. The vision sensor, especially the monocular camera, has become favoured in mapping approaches due to its high flexibility and low cost. However, monocular visual mapping suffers severe performance degradation in adversarial illumination environments such as low-light roads or underground spaces. To address this issue, in this paper we first introduce an unsupervised learning approach to improve keypoint detection and description on monocular camera images. By emphasizing the consistency between feature points in the learning loss, visual features in dim environments can be better extracted. Second, to suppress the scale drift in monocular visual mapping, a robust loop-closure detection scheme is presented, which integrates both feature-point verification and multi-grained image similarity measurements. Experiments on public benchmarks show that our keypoint detection approach is robust against varied illumination. With scenario tests including both underground and on-road driving, we demonstrate that our approach is able to reduce the scale drift in reconstructing the scene and achieves a mapping accuracy gain of up to 0.14 m in textureless or low-illumination environments.
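The abstract describes two mechanisms: a learning loss that emphasizes consistency between corresponding feature points, and a loop-closure check that combines a coarse image-similarity measurement with fine feature-point verification. The following NumPy sketch illustrates those two ideas in their simplest form; it is not the paper's implementation, and all function names, thresholds, and descriptor shapes are assumptions made for illustration.

```python
import numpy as np

def consistency_loss(desc_view1, desc_view2):
    """Toy descriptor-consistency loss: corresponding keypoints seen in two
    views of the same scene should yield similar descriptors."""
    return float(np.mean(np.sum((desc_view1 - desc_view2) ** 2, axis=1)))

def global_similarity(desc_a, desc_b):
    """Coarse check: cosine similarity between global image descriptors."""
    a = desc_a / np.linalg.norm(desc_a)
    b = desc_b / np.linalg.norm(desc_b)
    return float(a @ b)

def mutual_matches(feats_a, feats_b):
    """Fine check: count mutual nearest-neighbour keypoint matches between
    the local descriptor sets of two images."""
    d = np.linalg.norm(feats_a[:, None, :] - feats_b[None, :, :], axis=2)
    ab = d.argmin(axis=1)  # best match in B for each feature of A
    ba = d.argmin(axis=0)  # best match in A for each feature of B
    return int(sum(1 for i, j in enumerate(ab) if ba[j] == i))

def is_loop_closure(desc_a, desc_b, feats_a, feats_b,
                    sim_thresh=0.8, match_thresh=3):
    """Accept a loop-closure candidate only if both the global-similarity
    and the feature-verification stages pass."""
    if global_similarity(desc_a, desc_b) < sim_thresh:
        return False
    return mutual_matches(feats_a, feats_b) >= match_thresh
```

A candidate frame pair that fails the cheap global test is rejected before the quadratic-cost feature matching runs, which is the usual motivation for staging the two checks this way.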

Figures (PMC)

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/23eb/10058667/9b255e269338/sensors-23-03296-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/23eb/10058667/3ddfa61aaca4/sensors-23-03296-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/23eb/10058667/b352b36fb3fd/sensors-23-03296-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/23eb/10058667/b8bbe4008660/sensors-23-03296-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/23eb/10058667/15a688874412/sensors-23-03296-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/23eb/10058667/1ec6a4e1b897/sensors-23-03296-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/23eb/10058667/b0ec37ed2b22/sensors-23-03296-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/23eb/10058667/8e254bb2ba1a/sensors-23-03296-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/23eb/10058667/c338baa15130/sensors-23-03296-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/23eb/10058667/5ba59f5b4134/sensors-23-03296-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/23eb/10058667/562918a9fa61/sensors-23-03296-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/23eb/10058667/fe6c7c136da9/sensors-23-03296-g012.jpg

Similar Articles

1. Mapping with Monocular Camera Sensor under Adversarial Illumination for Intelligent Vehicles.
Sensors (Basel). 2023 Mar 21;23(6):3296. doi: 10.3390/s23063296.
2. Interactive Attention Learning on Detection of Lane and Lane Marking on the Road by Monocular Camera Image.
Sensors (Basel). 2023 Jul 20;23(14):6545. doi: 10.3390/s23146545.
3. Monocular Localization with Vector HD Map (MLVHM): A Low-Cost Method for Commercial IVs.
Sensors (Basel). 2020 Mar 27;20(7):1870. doi: 10.3390/s20071870.
4. Monocular Absolute Depth Estimation from Motion for Small Unmanned Aerial Vehicles by Geometry-Based Scale Recovery.
Sensors (Basel). 2024 Jul 13;24(14):4541. doi: 10.3390/s24144541.
5. A Coupled Visual and Inertial Measurement Units Method for Locating and Mapping in Coal Mine Tunnel.
Sensors (Basel). 2022 Sep 30;22(19):7437. doi: 10.3390/s22197437.
6. Masked GAN for Unsupervised Depth and Pose Prediction With Scale Consistency.
IEEE Trans Neural Netw Learn Syst. 2021 Dec;32(12):5392-5403. doi: 10.1109/TNNLS.2020.3044181. Epub 2021 Nov 30.
7. SLAM-based dense surface reconstruction in monocular Minimally Invasive Surgery and its application to Augmented Reality.
Comput Methods Programs Biomed. 2018 May;158:135-146. doi: 10.1016/j.cmpb.2018.02.006. Epub 2018 Feb 8.
8. SLAM and 3D Semantic Reconstruction Based on the Fusion of Lidar and Monocular Vision.
Sensors (Basel). 2023 Jan 29;23(3):1502. doi: 10.3390/s23031502.
9. Integration of Low-Cost GNSS and Monocular Cameras for Simultaneous Localization and Mapping.
Sensors (Basel). 2018 Jul 7;18(7):2193. doi: 10.3390/s18072193.
10. Scale Estimation and Correction of the Monocular Simultaneous Localization and Mapping (SLAM) Based on Fusion of 1D Laser Range Finder and Vision Data.
Sensors (Basel). 2018 Jun 15;18(6):1948. doi: 10.3390/s18061948.

References Cited in This Article

1. Indoor Location Technology with High Accuracy Using Simple Visual Tags.
Sensors (Basel). 2023 Feb 1;23(3):1597. doi: 10.3390/s23031597.
2. An Adaptive ORB-SLAM3 System for Outdoor Dynamic Environments.
Sensors (Basel). 2023 Jan 25;23(3):1359. doi: 10.3390/s23031359.
3. A Monocular-Visual SLAM System with Semantic and Optical-Flow Fusion for Indoor Dynamic Environments.
Micromachines (Basel). 2022 Nov 17;13(11):2006. doi: 10.3390/mi13112006.
4. An Improved ASIFT Image Feature Matching Algorithm Based on POS Information.
Sensors (Basel). 2022 Oct 12;22(20):7749. doi: 10.3390/s22207749.
5. Vision-based Parking-slot Detection: A DCNN-based Approach and A Large-scale Benchmark Dataset.
IEEE Trans Image Process. 2018 Jul 18. doi: 10.1109/TIP.2018.2857407.
6. BRIEF: Computing a Local Binary Descriptor Very Fast.
IEEE Trans Pattern Anal Mach Intell. 2012 Jul;34(7):1281-98. doi: 10.1109/TPAMI.2011.222. Epub 2011 Nov 15.