Suppr 超能文献



LHFFNet: A hybrid feature fusion method for lane detection.

Authors

Kao Youchen, Che Shengbing, Zhou Sha, Guo Shenyi, Zhang Xu, Wang Wanqin

Affiliations

School of Computer and Mathematics, Central South University of Forestry and Technology, Changsha, 410004, Hunan, China.

School of Electronic Information and Physics, Central South University of Forestry and Technology, Changsha, 410004, Hunan, China.

Publication

Sci Rep. 2024 Jul 16;14(1):16353. doi: 10.1038/s41598-024-66913-1.

DOI: 10.1038/s41598-024-66913-1
PMID: 39013975
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11252289/
Abstract

Lane-line images exhibit large-scale variation and complex scene information, and adjacent lane lines are highly similar, which easily causes classification errors; distant lane lines are also hard to recognize because perspective narrows their apparent width. To address these issues, this paper proposes an effective lane-detection framework: a hybrid feature fusion network that enhances features across multiple spaces and distinguishes key features along the entire lane segment. It enhances and fuses lane-line features at multiple scales to strengthen the feature representation, especially at the far end. First, to strengthen the correlation of multiscale lane features, multi-head self-attention is used to build a multi-space attention enhancement module for feature enhancement across multiple spaces. Second, a spatially separable convolution branch is designed for the skip-layer structure that connects multiscale lane-line features; while retaining feature information at different scales, it emphasizes important lane regions through the allocation of spatial attention weights. Finally, because lane lines are elongated regions and background information in the image greatly outweighs lane-line information, traditional pooling has limited flexibility in capturing the anisotropic contexts common in real environments; therefore, strip pooling is introduced before the feature output branches to refine the representation of lane-line information and optimize model performance. Experiments show that the method reaches 96.84% accuracy on the TuSimple dataset and a 75.9% F1 score on the CULane dataset.
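The strip-pooling step described above can be sketched in NumPy. This follows the standard strip-pooling formulation (average pooling along each full row and each full column, with the two 1-D strips broadcast back and fused); the function name `strip_pool` and the toy feature map are illustrative assumptions, not the authors' code, which would typically also apply 1x1 convolutions and a sigmoid gate after the strips.

```python
import numpy as np

def strip_pool(feat):
    """Strip pooling sketch: average a (C, H, W) feature map along each
    full row and each full column, then broadcast-add the two 1-D strips
    back to the map's shape. Elongated structures such as lane lines
    respond strongly to one of the two strips."""
    row_strip = feat.mean(axis=2, keepdims=True)   # (C, H, 1): pool across width
    col_strip = feat.mean(axis=1, keepdims=True)   # (C, 1, W): pool across height
    # Fuse by broadcast addition; a learned 1x1 conv + sigmoid would
    # normally turn this into an attention map over the input.
    return row_strip + col_strip                   # (C, H, W)

# A vertical "lane" one column wide: the column strip captures it fully,
# while an ordinary square pooling window would dilute it.
fmap = np.zeros((1, 4, 6))
fmap[0, :, 2] = 1.0
out = strip_pool(fmap)
```

The anisotropy is the point: the column strip assigns the lane column a response of 1.0 while every row strip contributes only 1/6, so the elongated region stands out against the background.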

Figures (1-14):

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2a78/11252289/52795a396725/41598_2024_66913_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2a78/11252289/54cedd89ef03/41598_2024_66913_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2a78/11252289/d2f1a73adb81/41598_2024_66913_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2a78/11252289/510711af4f8f/41598_2024_66913_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2a78/11252289/f96000ca6991/41598_2024_66913_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2a78/11252289/0830b7331848/41598_2024_66913_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2a78/11252289/dbd3e3b87544/41598_2024_66913_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2a78/11252289/268eb0765d00/41598_2024_66913_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2a78/11252289/a5d37b30dca5/41598_2024_66913_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2a78/11252289/d9af227d3c21/41598_2024_66913_Fig10_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2a78/11252289/a79a5a7381c9/41598_2024_66913_Fig11_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2a78/11252289/6b111ea123d5/41598_2024_66913_Fig12_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2a78/11252289/2fba84357b8a/41598_2024_66913_Fig13_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2a78/11252289/37d1a7dcfb40/41598_2024_66913_Fig14_HTML.jpg

Similar Articles

1
LHFFNet: A hybrid feature fusion method for lane detection.
Sci Rep. 2024 Jul 16;14(1):16353. doi: 10.1038/s41598-024-66913-1.
2
Proportional feature pyramid network based on weight fusion for lane detection.
PeerJ Comput Sci. 2024 Jan 29;10:e1824. doi: 10.7717/peerj-cs.1824. eCollection 2024.
3
A Fast and Robust Lane Detection via Online Re-Parameterization and Hybrid Attention.
Sensors (Basel). 2023 Oct 7;23(19):8285. doi: 10.3390/s23198285.
4
The geometric attention-aware network for lane detection in complex road scenes.
PLoS One. 2021 Jul 15;16(7):e0254521. doi: 10.1371/journal.pone.0254521. eCollection 2021.
5
Combining Low-Light Scene Enhancement for Fast and Accurate Lane Detection.
Sensors (Basel). 2023 May 19;23(10):4917. doi: 10.3390/s23104917.
6
Multi-Object Trajectory Prediction Based on Lane Information and Generative Adversarial Network.
Sensors (Basel). 2024 Feb 17;24(4):1280. doi: 10.3390/s24041280.
7
FF-HPINet: A Flipped Feature and Hierarchical Position Information Extraction Network for Lane Detection.
Sensors (Basel). 2024 May 29;24(11):3502. doi: 10.3390/s24113502.
8
Simultaneous vehicle and lane detection via MobileNetV3 in car following scene.
PLoS One. 2022 Mar 4;17(3):e0264551. doi: 10.1371/journal.pone.0264551. eCollection 2022.
9
Research on Lane Line Detection Algorithm Based on Instance Segmentation.
Sensors (Basel). 2023 Jan 10;23(2):789. doi: 10.3390/s23020789.
10
An End-to-End Lane Detection Model with Attention and Residual Block.
Comput Intell Neurosci. 2022 Apr 13;2022:5852891. doi: 10.1155/2022/5852891. eCollection 2022.

Cited By

1
A lightweight large receptive field network LrfSR for image super-resolution.
Sci Rep. 2025 Apr 11;15(1):12535. doi: 10.1038/s41598-025-96796-9.