
ACNet: An Attention-Convolution Collaborative Semantic Segmentation Network on Sensor-Derived Datasets for Autonomous Driving.

Author Information

Zhang Qiliang, Hua Kaiwen, Zhang Zi, Zhao Yiwei, Chen Pengpeng

Affiliations

School of Computer Science and Technology, China University of Mining and Technology, Xuzhou 221116, China.

Mine Digitization Engineering Research Center of the Ministry of Education, Xuzhou 221116, China.

Publication Information

Sensors (Basel). 2025 Aug 3;25(15):4776. doi: 10.3390/s25154776.

DOI: 10.3390/s25154776
PMID: 40807940
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12349026/
Abstract

In intelligent vehicular networks, the accuracy of semantic segmentation in road scenes is crucial for vehicle-mounted artificial intelligence to achieve environmental perception, decision support, and safety control. Although deep learning methods have made significant progress, two main challenges remain: first, the difficulty in balancing global and local features leads to blurred object boundaries and misclassification; second, conventional convolutions have limited ability to perceive irregular objects, causing information loss and affecting segmentation accuracy. To address these issues, this paper proposes a global-local collaborative attention module and a spider web convolution module. The former enhances feature representation through bidirectional feature interaction and dynamic weight allocation, reducing false positives and missed detections. The latter introduces an asymmetric sampling topology and six-directional receptive field paths to effectively improve the recognition of irregular objects. Experiments on the Cityscapes, CamVid, and BDD100K datasets, collected using vehicle-mounted cameras, demonstrate that the proposed method performs excellently across multiple evaluation metrics, including mIoU, mRecall, mPrecision, and mAccuracy. Comparative experiments with classical segmentation networks, attention mechanisms, and convolution modules validate the effectiveness of the proposed approach. The proposed method demonstrates outstanding performance in sensor-based semantic segmentation tasks and is well-suited for environmental perception systems in autonomous driving.
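The abstract describes a global-local collaborative attention module that fuses global and local features via dynamic weight allocation. A minimal NumPy sketch of that idea follows; the function name, the use of average pooling for the global branch, a box filter for the local branch, and the sigmoid gate are all illustrative assumptions — the paper's actual module additionally uses bidirectional feature interaction, which is not reproduced here.

```python
import numpy as np

def global_local_fuse(feat, kernel=3):
    """Fuse a global context branch and a local branch with a dynamic gate.

    feat: (C, H, W) feature map. Returns a (C, H, W) fused map.
    """
    C, H, W = feat.shape
    # Global branch: per-channel global average pooling -> one context value per channel.
    g = feat.mean(axis=(1, 2), keepdims=True)              # (C, 1, 1)
    # Local branch: a simple box-filter average stands in for a local convolution.
    pad = kernel // 2
    padded = np.pad(feat, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    local = np.zeros_like(feat)
    for dy in range(kernel):
        for dx in range(kernel):
            local += padded[:, dy:dy + H, dx:dx + W]
    local /= kernel * kernel
    # Dynamic weight allocation: a sigmoid gate decides, per position,
    # how much local detail vs. global context to keep.
    gate = 1.0 / (1.0 + np.exp(-(local - g)))
    return gate * local + (1.0 - gate) * g
```

Because the output is a convex combination of two averages of the input, it always stays within the input's value range — a cheap sanity check when experimenting with gating schemes like this.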

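The spider web convolution module is described as introducing an asymmetric sampling topology with six-directional receptive-field paths. The sketch below shows only the sampling idea in NumPy: aggregating each pixel with neighbours along six fixed directions. The specific offsets, uniform weights, and function name are assumptions for illustration; the paper's exact topology and learned weights are not given in the abstract.

```python
import numpy as np

# Six assumed direction offsets (dy, dx); the paper's actual topology may differ.
DIRS = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, -1), (1, 1)]

def six_dir_response(img, weights=None):
    """Aggregate each pixel with its neighbours along six fixed directions.

    img: (H, W) single-channel map. Returns an (H, W) response map.
    """
    H, W = img.shape
    if weights is None:
        # Uniform weights over the centre pixel plus six directional taps.
        weights = np.full(len(DIRS), 1.0 / (len(DIRS) + 1))
    padded = np.pad(img, 1, mode="edge")
    out = img / (len(DIRS) + 1)                 # centre tap
    for w, (dy, dx) in zip(weights, DIRS):
        # Shift the padded image by (dy, dx) and accumulate the weighted tap.
        out = out + w * padded[1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
    return out
```

With uniform weights the operator preserves constant inputs, which makes it easy to verify that the directional taps and padding are wired correctly before swapping in learned weights.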

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5b2d/12349026/3171ed677831/sensors-25-04776-g016.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5b2d/12349026/d570a4862f68/sensors-25-04776-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5b2d/12349026/0db276934945/sensors-25-04776-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5b2d/12349026/baaa6e811215/sensors-25-04776-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5b2d/12349026/e353f856045c/sensors-25-04776-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5b2d/12349026/fabdce7df1be/sensors-25-04776-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5b2d/12349026/659e0d6303c2/sensors-25-04776-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5b2d/12349026/8c1a3bab367e/sensors-25-04776-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5b2d/12349026/65c2bd0f7cfc/sensors-25-04776-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5b2d/12349026/c44a5fa7b57a/sensors-25-04776-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5b2d/12349026/f717c729e900/sensors-25-04776-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5b2d/12349026/6655eb2de410/sensors-25-04776-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5b2d/12349026/ac153256f1b6/sensors-25-04776-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5b2d/12349026/8e941fbad83b/sensors-25-04776-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5b2d/12349026/4d7aaf816d4c/sensors-25-04776-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5b2d/12349026/63d64a580212/sensors-25-04776-g015.jpg

Similar Articles

1. ACNet: An Attention-Convolution Collaborative Semantic Segmentation Network on Sensor-Derived Datasets for Autonomous Driving.
   Sensors (Basel). 2025 Aug 3;25(15):4776. doi: 10.3390/s25154776.
2. Integrated neural network framework for multi-object detection and recognition using UAV imagery.
   Front Neurorobot. 2025 Jul 30;19:1643011. doi: 10.3389/fnbot.2025.1643011. eCollection 2025.
3. Multi-level channel-spatial attention and light-weight scale-fusion network (MCSLF-Net): multi-level channel-spatial attention and light-weight scale-fusion transformer for 3D brain tumor segmentation.
   Quant Imaging Med Surg. 2025 Jul 1;15(7):6301-6325. doi: 10.21037/qims-2025-354. Epub 2025 Jun 30.
4. A novel image segmentation network with multi-scale and flow-guided attention for early screening of vaginal intraepithelial neoplasia (VAIN).
   Med Phys. 2025 Aug;52(8):e18041. doi: 10.1002/mp.18041.
5. DGCFNet: Dual Global Context Fusion Network for remote sensing image semantic segmentation.
   PeerJ Comput Sci. 2025 Mar 27;11:e2786. doi: 10.7717/peerj-cs.2786. eCollection 2025.
6. Short-Term Memory Impairment
7. Steel surface defect segmentation with SME-DeeplabV3.
   PLoS One. 2025 Aug 14;20(8):e0329628. doi: 10.1371/journal.pone.0329628. eCollection 2025.
8. Static-dynamic class-level perception consistency in video semantic segmentation.
   Neural Netw. 2025 Aug 7;192:107953. doi: 10.1016/j.neunet.2025.107953.
9. LEAD-YOLO: A Lightweight and Accurate Network for Small Object Detection in Autonomous Driving.
   Sensors (Basel). 2025 Aug 4;25(15):4800. doi: 10.3390/s25154800.
10. SAM for Road Object Segmentation: Promising but Challenging.
   J Imaging. 2025 Jun 10;11(6):189. doi: 10.3390/jimaging11060189.
