
An Object-Centric Hierarchical Pose Estimation Method Using Semantic High-Definition Maps for General Autonomous Driving.

Authors

Pyo Jeong-Won, Choi Jun-Hyeon, Kuc Tae-Yong

Affiliation

Department of Electrical and Computer Engineering, College of Information and Communication Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea.

Publication

Sensors (Basel). 2024 Aug 11;24(16):5191. doi: 10.3390/s24165191.

DOI: 10.3390/s24165191
PMID: 39204886
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11359054/
Abstract

To achieve Level 4 and above autonomous driving, a robust and stable autonomous driving system is essential to adapt to various environmental changes. This paper aims to perform vehicle pose estimation, a crucial element in forming autonomous driving systems, more universally and robustly. The prevalent method for vehicle pose estimation in autonomous driving systems relies on Real-Time Kinematic (RTK) sensor data, ensuring accurate location acquisition. However, due to the characteristics of RTK sensors, precise positioning is challenging or impossible in indoor spaces or areas with signal interference, leading to inaccurate pose estimation and hindering autonomous driving in such scenarios. This paper proposes a method to overcome these challenges by leveraging objects registered in a high-precision map. The proposed approach involves creating a semantic high-definition (HD) map with added objects, forming object-centric features, recognizing locations using these features, and accurately estimating the vehicle's pose from the recognized location. This proposed method enhances the precision of vehicle pose estimation in environments where acquiring RTK sensor data is challenging, enabling more robust and stable autonomous driving. The paper demonstrates the proposed method's effectiveness through simulation and real-world experiments, showcasing its capability for more precise pose estimation.
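The abstract outlines a pipeline: build a semantic high-definition map augmented with registered objects, form object-centric features, recognize the current location from those features, and then estimate the vehicle pose relative to the recognized map objects. As a rough illustration of the final alignment step only (not the paper's actual algorithm), the sketch below assumes a set of detected objects expressed in the vehicle frame has already been matched to objects in the map frame, and recovers a 2D rigid pose in closed form via the orthogonal Procrustes (Kabsch) solution. The function name, the 2D simplification, and the assumption of known correspondences are all illustrative.

```python
import numpy as np

def estimate_pose_2d(obj_vehicle, obj_map):
    """Estimate the vehicle pose (R, t) in the map frame from matched object
    centers, i.e. find R, t such that obj_map[i] ~= R @ obj_vehicle[i] + t.

    obj_vehicle, obj_map: (N, 2) arrays of matched object positions in the
    vehicle frame and the map frame, respectively (N >= 2, non-collinear).
    Returns (R, t): a 2x2 rotation matrix and a length-2 translation.
    """
    p = np.asarray(obj_vehicle, dtype=float)
    q = np.asarray(obj_map, dtype=float)
    # Center both point sets, then solve the orthogonal Procrustes problem.
    p_mean, q_mean = p.mean(axis=0), q.mean(axis=0)
    H = (p - p_mean).T @ (q - q_mean)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Toy example: three landmarks seen from a vehicle rotated 30 degrees and
# translated by (5, 2) relative to the map origin (hypothetical values).
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([5.0, 2.0])
landmarks_map = np.array([[10.0, 0.0], [12.0, 4.0], [8.0, 6.0]])
landmarks_vehicle = (landmarks_map - t_true) @ R_true  # inverse transform
R_est, t_est = estimate_pose_2d(landmarks_vehicle, landmarks_map)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))  # True True
```

In the paper's setting, the correspondences themselves come from the object-centric features and hierarchical place recognition; a robust scheme (e.g., RANSAC over candidate object matches) would replace the exact correspondences assumed in this sketch.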


Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8356/11359054/fe539be333e7/sensors-24-05191-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8356/11359054/c39e0dec82c9/sensors-24-05191-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8356/11359054/02eb4e5d68d2/sensors-24-05191-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8356/11359054/b026a4cd4f49/sensors-24-05191-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8356/11359054/376e8df2da25/sensors-24-05191-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8356/11359054/8718302ecf12/sensors-24-05191-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8356/11359054/d9a50e6cd53f/sensors-24-05191-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8356/11359054/aa86eaad0c3a/sensors-24-05191-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8356/11359054/17b7ce860904/sensors-24-05191-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8356/11359054/d9cd12c48d7f/sensors-24-05191-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8356/11359054/4ca2d8d46243/sensors-24-05191-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8356/11359054/1c3ee09c9e55/sensors-24-05191-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8356/11359054/eb9145587617/sensors-24-05191-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8356/11359054/7cc2c2b23120/sensors-24-05191-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8356/11359054/34e36174692d/sensors-24-05191-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8356/11359054/43f9771a3525/sensors-24-05191-g016.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8356/11359054/eb4270e7ac0c/sensors-24-05191-g017.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8356/11359054/fd2259206f3c/sensors-24-05191-g018.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8356/11359054/e4bb93c9199a/sensors-24-05191-g019.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8356/11359054/438b02e59ea4/sensors-24-05191-g020.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8356/11359054/e623a96cc139/sensors-24-05191-g021.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8356/11359054/85bdd4a35a5f/sensors-24-05191-g022.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8356/11359054/e57cf87ef364/sensors-24-05191-g023.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8356/11359054/cbae62141896/sensors-24-05191-g024.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8356/11359054/acdc504cdb34/sensors-24-05191-g025.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8356/11359054/3a172d823b49/sensors-24-05191-g026.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8356/11359054/e540f8810d75/sensors-24-05191-g027.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8356/11359054/893e73234562/sensors-24-05191-g028.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8356/11359054/bea9c2d9472c/sensors-24-05191-g029.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8356/11359054/ba3f626c6baa/sensors-24-05191-g030.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8356/11359054/020f22b249fa/sensors-24-05191-g031.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8356/11359054/8e3625d7768f/sensors-24-05191-g032.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8356/11359054/ad35ed08b0ce/sensors-24-05191-g033.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8356/11359054/86552e779d51/sensors-24-05191-g034.jpg

Similar Articles

1
An Object-Centric Hierarchical Pose Estimation Method Using Semantic High-Definition Maps for General Autonomous Driving.
Sensors (Basel). 2024 Aug 11;24(16):5191. doi: 10.3390/s24165191.
2
Multiple Event-Based Simulation Scenario Generation Approach for Autonomous Vehicle Smart Sensors and Devices.
Sensors (Basel). 2019 Oct 14;19(20):4456. doi: 10.3390/s19204456.
3
Development of a Moving Baseline RTK/Motion Sensor-Integrated Positioning-Based Autonomous Driving Algorithm for a Speed Sprayer.
Sensors (Basel). 2022 Dec 15;22(24):9881. doi: 10.3390/s22249881.
4
Information System Model and Key Technologies of High-Definition Maps in Autonomous Driving Scenarios.
Sensors (Basel). 2024 Jun 25;24(13):4115. doi: 10.3390/s24134115.
5
Investigating the Improvement of Autonomous Vehicle Performance through the Integration of Multi-Sensor Dynamic Mapping Techniques.
Sensors (Basel). 2023 Feb 21;23(5):2369. doi: 10.3390/s23052369.
6
Radar sensor based machine learning approach for precise vehicle position estimation.
Sci Rep. 2023 Aug 24;13(1):13837. doi: 10.1038/s41598-023-40961-5.
7
Pose Prediction of Autonomous Full Tracked Vehicle Based on 3D Sensor.
Sensors (Basel). 2019 Nov 22;19(23):5120. doi: 10.3390/s19235120.
8
LiDAR-Based Sensor Fusion SLAM and Localization for Autonomous Driving Vehicles in Complex Scenarios.
J Imaging. 2023 Feb 20;9(2):52. doi: 10.3390/jimaging9020052.
9
Development of an Autonomous Driving Vehicle for Garbage Collection in Residential Areas.
Sensors (Basel). 2022 Nov 23;22(23):9094. doi: 10.3390/s22239094.
10
Visual Semantic Landmark-Based Robust Mapping and Localization for Autonomous Indoor Parking.
Sensors (Basel). 2019 Jan 4;19(1):161. doi: 10.3390/s19010161.

Cited By

1
TOSD: A Hierarchical Object-Centric Descriptor Integrating Shape, Color, and Topology.
Sensors (Basel). 2025 Jul 25;25(15):4614. doi: 10.3390/s25154614.

References

1
End-to-End Autonomous Driving: Challenges and Frontiers.
IEEE Trans Pattern Anal Mach Intell. 2024 Dec;46(12):10164-10183. doi: 10.1109/TPAMI.2024.3435937. Epub 2024 Nov 6.
2
Development of an Autonomous Driving Vehicle for Garbage Collection in Residential Areas.
Sensors (Basel). 2022 Nov 23;22(23):9094. doi: 10.3390/s22239094.
3
Enhanced Perception for Autonomous Driving Using Semantic and Geometric Data Fusion.
Sensors (Basel). 2022 Jul 5;22(13):5061. doi: 10.3390/s22135061.
4
Adaptive Real-Time Object Detection for Autonomous Driving Systems.
J Imaging. 2022 Apr 11;8(4):106. doi: 10.3390/jimaging8040106.
5
Authorized Traffic Controller Hand Gesture Recognition for Situation-Aware Autonomous Driving.
Sensors (Basel). 2021 Nov 27;21(23):7914. doi: 10.3390/s21237914.
6
Integration of GPS, Monocular Vision, and High Definition (HD) Map for Accurate Vehicle Localization.
Sensors (Basel). 2018 Sep 28;18(10):3270. doi: 10.3390/s18103270.
7
Integration of Low-Cost GNSS and Monocular Cameras for Simultaneous Localization and Mapping.
Sensors (Basel). 2018 Jul 7;18(7):2193. doi: 10.3390/s18072193.
8
NetVLAD: CNN Architecture for Weakly Supervised Place Recognition.
IEEE Trans Pattern Anal Mach Intell. 2018 Jun;40(6):1437-1451. doi: 10.1109/TPAMI.2017.2711011. Epub 2017 Jun 1.
9
GPS/DR Error Estimation for Autonomous Vehicle Localization.
Sensors (Basel). 2015 Aug 21;15(8):20779-98. doi: 10.3390/s150820779.
10
Monocular camera/IMU/GNSS integration for ground vehicle navigation in challenging GNSS environments.
Sensors (Basel). 2012;12(3):3162-85. doi: 10.3390/s120303162. Epub 2012 Mar 7.