

Improved object detection method for unmanned driving based on Transformers.

Authors

Zhao Huaqi, Peng Xiang, Wang Su, Li Jun-Bao, Pan Jeng-Shyang, Su Xiaoguang, Liu Xiaomin

Affiliations

The Heilongjiang Provincial Key Laboratory of Autonomous Intelligence and Information Processing, School of Information and Electronic Technology, Jiamusi University, Jiamusi, China.

Harbin Institute of Technology, Harbin, China.

Publication

Front Neurorobot. 2024 May 1;18:1342126. doi: 10.3389/fnbot.2024.1342126. eCollection 2024.

DOI: 10.3389/fnbot.2024.1342126
PMID: 38752022
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11094364/
Abstract

The object detection method serves as the core technology within the unmanned driving perception module, extensively employed for detecting vehicles, pedestrians, traffic signs, and various objects. However, existing object detection methods still encounter three challenges in intricate unmanned driving scenarios: unsatisfactory performance in multi-scale object detection, inadequate accuracy in detecting small objects, and occurrences of false positives and missed detections in densely occluded environments. Therefore, this study proposes an improved object detection method for unmanned driving, leveraging Transformer architecture to address these challenges. First, a multi-scale Transformer feature extraction method integrated with channel attention is used to enhance the network's capability in extracting features across different scales. Second, a training method incorporating Query Denoising with Gaussian decay was employed to enhance the network's proficiency in learning representations of small objects. Third, a hybrid matching method combining Optimal Transport and Hungarian algorithms was used to facilitate the matching process between predicted and actual values, thereby enriching the network with more informative positive sample features. Experimental evaluations conducted on datasets including KITTI demonstrate that the proposed method achieves 3% higher mean Average Precision (mAP) than that of the existing methodologies.
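The hybrid matching step described in the abstract pairs each ground-truth box with a predicted box by minimising a matching cost. As a rough, self-contained illustration of that assignment idea only (not the authors' implementation, which combines Optimal Transport with the Hungarian algorithm; the brute-force search and the 1 − IoU cost below are simplifying assumptions), the one-to-one step can be sketched as:

```python
from itertools import permutations

def iou(a, b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def match(preds, gts):
    """Assign one distinct prediction to each ground-truth box,
    minimising total (1 - IoU) cost. Brute force stands in for the
    Hungarian algorithm and only suits tiny toy examples."""
    best_cost, best_assign = float("inf"), None
    for perm in permutations(range(len(preds)), len(gts)):
        cost = sum(1.0 - iou(preds[p], gts[g]) for g, p in enumerate(perm))
        if cost < best_cost:
            best_cost, best_assign = cost, perm
    return best_assign, best_cost

preds = [(0, 0, 10, 10), (20, 20, 30, 30), (5, 5, 15, 15)]
gts = [(21, 21, 31, 31), (0, 0, 10, 10)]
assign, _ = match(preds, gts)  # assign[g] = index of prediction matched to gts[g]
print(assign)  # → (1, 0)
```

In practice the Hungarian step runs in polynomial time (e.g. `scipy.optimize.linear_sum_assignment`), and DETR-style matching costs also include classification and L1 box terms; per the abstract, the paper's contribution is combining this one-to-one scheme with Optimal Transport so the network sees more informative positive samples.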


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b691/11094364/a4c80ba6a815/fnbot-18-1342126-g0017.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b691/11094364/657804d4a066/fnbot-18-1342126-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b691/11094364/d0d5dfe2144b/fnbot-18-1342126-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b691/11094364/39e742423e2f/fnbot-18-1342126-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b691/11094364/86ad3305b845/fnbot-18-1342126-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b691/11094364/0b2f81862459/fnbot-18-1342126-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b691/11094364/d165aa0feb1c/fnbot-18-1342126-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b691/11094364/710ac0ad1111/fnbot-18-1342126-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b691/11094364/3cb0d84d1da4/fnbot-18-1342126-g0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b691/11094364/69cfdf2aeaa5/fnbot-18-1342126-g0009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b691/11094364/318ecdddd29f/fnbot-18-1342126-g0010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b691/11094364/66d1891ab8c3/fnbot-18-1342126-g0011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b691/11094364/3d31d591c588/fnbot-18-1342126-g0012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b691/11094364/e2c85f723399/fnbot-18-1342126-g0013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b691/11094364/1146739694a9/fnbot-18-1342126-g0014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b691/11094364/5ff06da025ae/fnbot-18-1342126-g0015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b691/11094364/6db760ebcc6d/fnbot-18-1342126-g0016.jpg

Similar articles

1
Improved object detection method for unmanned driving based on Transformers.
Front Neurorobot. 2024 May 1;18:1342126. doi: 10.3389/fnbot.2024.1342126. eCollection 2024.
2
SRE-YOLOv8: An Improved UAV Object Detection Model Utilizing Swin Transformer and RE-FPN.
Sensors (Basel). 2024 Jun 17;24(12):3918. doi: 10.3390/s24123918.
3
Multi-Task Environmental Perception Methods for Autonomous Driving.
Sensors (Basel). 2024 Aug 28;24(17):5552. doi: 10.3390/s24175552.
4
Drone-DETR: Efficient Small Object Detection for Remote Sensing Image Using Enhanced RT-DETR Model.
Sensors (Basel). 2024 Aug 24;24(17):5496. doi: 10.3390/s24175496.
5
3D Point Cloud Object Detection Method Based on Multi-Scale Dynamic Sparse Voxelization.
Sensors (Basel). 2024 Mar 11;24(6):1804. doi: 10.3390/s24061804.
6
An object detection algorithm combining self-attention and YOLOv4 in traffic scene.
PLoS One. 2023 May 18;18(5):e0285654. doi: 10.1371/journal.pone.0285654. eCollection 2023.
7
3D Object Detection Based on Attention and Multi-Scale Feature Fusion.
Sensors (Basel). 2022 May 23;22(10):3935. doi: 10.3390/s22103935.
8
Enhanced Lightweight YOLOX for Small Object Wildfire Detection in UAV Imagery.
Sensors (Basel). 2024 Apr 24;24(9):2710. doi: 10.3390/s24092710.
9
Fast and accurate object detector for autonomous driving based on improved YOLOv5.
Sci Rep. 2023 Jun 15;13(1):9711. doi: 10.1038/s41598-023-36868-w.
10
HRYNet: A Highly Robust YOLO Network for Complex Road Traffic Object Detection.
Sensors (Basel). 2024 Jan 19;24(2):642. doi: 10.3390/s24020642.

Cited by

1
Context-Aware Enhanced Feature Refinement for small object detection with Deformable DETR.
Front Neurorobot. 2025 Jun 10;19:1588565. doi: 10.3389/fnbot.2025.1588565. eCollection 2025.

References

1
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.
IEEE Trans Pattern Anal Mach Intell. 2017 Jun;39(6):1137-1149. doi: 10.1109/TPAMI.2016.2577031. Epub 2016 Jun 6.