


FF-HPINet: A Flipped Feature and Hierarchical Position Information Extraction Network for Lane Detection.

Authors

Zhou Xiaofeng, Zhang Peng

Affiliation

School of Electronics and Communication Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China.

Publication

Sensors (Basel). 2024 May 29;24(11):3502. doi: 10.3390/s24113502.

DOI: 10.3390/s24113502
PMID: 38894293
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11174791/
Abstract

Effective lane detection plays an important role in current autonomous driving systems. Although deep learning models, with their intricate network designs, have proven highly capable of detecting lanes, key challenges remain. First, the symmetry inherent in images captured by forward-facing automotive cameras is an underexploited resource. Second, the potential of position information remains largely untapped, which can undermine detection precision. In response to these challenges, we propose FF-HPINet, a novel approach for lane detection. We introduce the Flipped Feature Extraction module, which models pixel-pairwise relationships between the flipped feature and the original feature. This module allows us to capture symmetrical features and obtain high-level semantic feature maps from different receptive fields. Additionally, we design the Hierarchical Position Information Extraction module to meticulously mine the position information of the lanes, greatly improving target identification accuracy. Furthermore, the Deformable Context Extraction module is proposed to distill vital foreground elements and contextual nuances from the surrounding environment, yielding focused and contextually apt feature representations. Our approach achieves excellent performance, with an F1 score of 97.00% on the TuSimple dataset and 76.84% on the CULane dataset.
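The core idea of the Flipped Feature Extraction module — correlating a feature map with its horizontally mirrored copy to exploit the left/right symmetry of forward-facing road imagery — can be sketched as follows. This is a minimal, hypothetical illustration of pixel-pairwise flipped-feature attention in NumPy, not the authors' exact implementation; the function name and per-row attention design are assumptions for clarity.

```python
import numpy as np

def flipped_feature_attention(feat):
    """Hypothetical sketch: correlate each spatial position of a feature
    map with its horizontally flipped counterpart, exposing left/right
    symmetry cues such as paired lane markings.

    feat: array of shape (C, H, W).
    Returns a fused map of the same shape.
    """
    C, H, W = feat.shape
    flipped = feat[:, :, ::-1]            # mirror along the width axis

    fused = np.empty_like(feat)
    for y in range(H):
        orig_row = feat[:, y, :]          # (C, W)
        flip_row = flipped[:, y, :]       # (C, W)
        # Pixel-pairwise affinity between original and flipped pixels.
        affinity = orig_row.T @ flip_row  # (W, W)
        # Softmax over the flipped positions each original pixel attends to.
        affinity -= affinity.max(axis=1, keepdims=True)
        weights = np.exp(affinity)
        weights /= weights.sum(axis=1, keepdims=True)
        # Aggregate attended flipped features and add back as a residual.
        fused[:, y, :] = orig_row + flip_row @ weights.T
    return fused
```

A real implementation would operate on learned convolutional features (e.g., in PyTorch with batched matrix multiplies) rather than raw NumPy arrays, but the flip-then-correlate structure is the same.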


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f28a/11174791/3325f4bdfe3d/sensors-24-03502-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f28a/11174791/4c0881120908/sensors-24-03502-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f28a/11174791/8cdb3244e68e/sensors-24-03502-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f28a/11174791/f7dbdcc70609/sensors-24-03502-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f28a/11174791/d726307362ed/sensors-24-03502-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f28a/11174791/cb8fc7dd36aa/sensors-24-03502-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f28a/11174791/5f9746ba29f6/sensors-24-03502-g007.jpg

Similar Articles

1. Proportional feature pyramid network based on weight fusion for lane detection.
PeerJ Comput Sci. 2024 Jan 29;10:e1824. doi: 10.7717/peerj-cs.1824. eCollection 2024.
2. Interactive Attention Learning on Detection of Lane and Lane Marking on the Road by Monocular Camera Image.
Sensors (Basel). 2023 Jul 20;23(14):6545. doi: 10.3390/s23146545.
3. Effective lane detection on complex roads with convolutional attention mechanism in autonomous vehicles.
Sci Rep. 2024 Aug 19;14(1):19193. doi: 10.1038/s41598-024-70116-z.
4. LHFFNet: A hybrid feature fusion method for lane detection.
Sci Rep. 2024 Jul 16;14(1):16353. doi: 10.1038/s41598-024-66913-1.
5. LLDNet: A Lightweight Lane Detection Approach for Autonomous Cars Using Deep Learning.
Sensors (Basel). 2022 Jul 26;22(15):5595. doi: 10.3390/s22155595.
6. Combining Low-Light Scene Enhancement for Fast and Accurate Lane Detection.
Sensors (Basel). 2023 May 19;23(10):4917. doi: 10.3390/s23104917.
7. Fast and Accurate Lane Detection via Graph Structure and Disentangled Representation Learning.
Sensors (Basel). 2021 Jul 7;21(14):4657. doi: 10.3390/s21144657.
8. Multimodal Fusion Network for 3-D Lane Detection.
IEEE Trans Neural Netw Learn Syst. 2025 Apr;36(4):6054-6066. doi: 10.1109/TNNLS.2024.3398654. Epub 2025 Apr 4.
9. A deep learning approach for lane marking detection applying encode-decode instant segmentation network.
Heliyon. 2023 Mar 3;9(3):e14212. doi: 10.1016/j.heliyon.2023.e14212. eCollection 2023 Mar.
