

Combining Low-Light Scene Enhancement for Fast and Accurate Lane Detection.

Affiliations

School of Science, Beijing University of Civil Engineering and Architecture, Beijing 102616, China.

School of Geomatics and Urban Spatial Informatics, Beijing University of Civil Engineering and Architecture, Beijing 102616, China.

Publication

Sensors (Basel). 2023 May 19;23(10):4917. doi: 10.3390/s23104917.

DOI: 10.3390/s23104917
PMID: 37430833
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10223488/
Abstract

Lane detection is a crucial task in the field of autonomous driving, as it enables vehicles to safely navigate on the road by interpreting the high-level semantics of traffic signs. Unfortunately, lane detection is a challenging problem due to factors such as low-light conditions, occlusions, and lane line blurring. These factors increase the perplexity and indeterminacy of the lane features, making them hard to distinguish and segment. To tackle these challenges, we propose a method called low-light enhancement fast lane detection (LLFLD) that integrates the automatic low-light scene enhancement network (ALLE) with the lane detection network to improve lane detection performance under low-light conditions. Specifically, we first utilize the ALLE network to enhance the input image's brightness and contrast while reducing excessive noise and color distortion. Then, we introduce the symmetric feature flipping module (SFFM) and the channel fusion self-attention mechanism (CFSAT) into the model, which refine the low-level features and exploit richer global contextual information, respectively. Moreover, we devise a novel structural loss function that leverages the inherent prior geometric constraints of lanes to optimize the detection results. We evaluate our method on the CULane dataset, a public benchmark for lane detection under various lighting conditions. Our experiments show that our approach surpasses other state-of-the-art methods in both daytime and nighttime settings, especially in low-light scenarios.
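The abstract's "structural loss" builds on the geometric prior that lane markings are smooth, near-straight curves. The paper's exact formulation is not reproduced here; a minimal pure-Python sketch of one common prior of this kind — penalizing squared second differences of a lane's x-coordinates sampled at evenly spaced image rows — looks like this (the function name and formulation are illustrative, not taken from the paper):

```python
def lane_smoothness_loss(xs):
    """Squared second-difference penalty on lane x-coordinates.

    xs: x-positions of one predicted lane, sampled at evenly spaced
    image rows. Collinear points give zero loss; curvature or
    row-to-row jitter is penalized quadratically.
    """
    return sum((xs[i - 1] - 2 * xs[i] + xs[i + 1]) ** 2
               for i in range(1, len(xs) - 1))

straight = [10.0, 12.0, 14.0, 16.0]  # collinear -> zero penalty
jagged = [10.0, 15.0, 11.0, 18.0]    # zig-zag -> large penalty
print(lane_smoothness_loss(straight))  # 0.0
print(lane_smoothness_loss(jagged))    # 202.0
```

A term like this is typically added to the segmentation or row-classification loss with a small weight, so the network is nudged toward geometrically plausible lanes without overriding the data term.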


Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6343/10223488/86372c671783/sensors-23-04917-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6343/10223488/e93fcf30670c/sensors-23-04917-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6343/10223488/cfeebf433980/sensors-23-04917-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6343/10223488/5e2c99a28e39/sensors-23-04917-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6343/10223488/7f7ed7f33d4f/sensors-23-04917-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6343/10223488/fa694cbb86ce/sensors-23-04917-g006.jpg

Similar Articles

1. Combining Low-Light Scene Enhancement for Fast and Accurate Lane Detection.
   Sensors (Basel). 2023 May 19;23(10):4917. doi: 10.3390/s23104917.
2. ASA-BiSeNet: improved real-time approach for road lane semantic segmentation of low-light autonomous driving road scenes.
   Appl Opt. 2023 Jul 1;62(19):5224-5235. doi: 10.1364/AO.486302.
3. Interactive Attention Learning on Detection of Lane and Lane Marking on the Road by Monocular Camera Image.
   Sensors (Basel). 2023 Jul 20;23(14):6545. doi: 10.3390/s23146545.
4. Real-time lane detection model based on non bottleneck skip residual connections and attention pyramids.
   PLoS One. 2021 Oct 19;16(10):e0252755. doi: 10.1371/journal.pone.0252755. eCollection 2021.
5. The geometric attention-aware network for lane detection in complex road scenes.
   PLoS One. 2021 Jul 15;16(7):e0254521. doi: 10.1371/journal.pone.0254521. eCollection 2021.
6. LHFFNet: A hybrid feature fusion method for lane detection.
   Sci Rep. 2024 Jul 16;14(1):16353. doi: 10.1038/s41598-024-66913-1.
7. Effective lane detection on complex roads with convolutional attention mechanism in autonomous vehicles.
   Sci Rep. 2024 Aug 19;14(1):19193. doi: 10.1038/s41598-024-70116-z.
8. Ultra Fast Deep Lane Detection With Hybrid Anchor Driven Ordinal Classification.
   IEEE Trans Pattern Anal Mach Intell. 2024 May;46(5):2555-2568. doi: 10.1109/TPAMI.2022.3182097. Epub 2024 Apr 3.
9. Reliable Road Scene Interpretation Based on ITOM with the Integrated Fusion of Vehicle and Lane Tracker in Dense Traffic Situation.
   Sensors (Basel). 2020 Apr 26;20(9):2457. doi: 10.3390/s20092457.
10. LLDNet: A Lightweight Lane Detection Approach for Autonomous Cars Using Deep Learning.
   Sensors (Basel). 2022 Jul 26;22(15):5595. doi: 10.3390/s22155595.

References Cited by This Article

1. Low-Light Image and Video Enhancement Using Deep Learning: A Survey.
   IEEE Trans Pattern Anal Mach Intell. 2022 Dec;44(12):9396-9416. doi: 10.1109/TPAMI.2021.3126387. Epub 2022 Nov 7.
2. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs.
   IEEE Trans Pattern Anal Mach Intell. 2018 Apr;40(4):834-848. doi: 10.1109/TPAMI.2017.2699184. Epub 2017 Apr 27.
3. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.
   IEEE Trans Pattern Anal Mach Intell. 2017 Jun;39(6):1137-1149. doi: 10.1109/TPAMI.2016.2577031. Epub 2016 Jun 6.
4. A computational approach to edge detection.
   IEEE Trans Pattern Anal Mach Intell. 1986 Jun;8(6):679-98.
5. Reducing the dimensionality of data with neural networks.
   Science. 2006 Jul 28;313(5786):504-7. doi: 10.1126/science.1127647.