

A multi-modality ground-to-air cross-view pose estimation dataset for field robots.

Authors

Yuan Xia, Wang Kaiyang, Qin Riyu, Xu Jiachen

Affiliations

Nanjing University of Science and Technology, School of Computer Science and Engineering, Nanjing, 210094, China.

Dahua Technology, Software Development Department, Hangzhou, 310000, China.

Publication

Sci Data. 2025 May 7;12(1):754. doi: 10.1038/s41597-025-05075-9.

DOI: 10.1038/s41597-025-05075-9
PMID: 40335529
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12059049/
Abstract

High-precision localization is critical for intelligent robotics in autonomous driving, smart agriculture, and military operations. While the Global Navigation Satellite System (GNSS) provides global positioning, its reliability deteriorates severely in signal-degraded environments such as urban canyons. Cross-view pose estimation using aerial-ground sensor fusion offers an economical alternative, yet current datasets lack field scenarios and high-resolution LiDAR support. This work introduces a multimodal cross-view dataset addressing these gaps. It contains 29,940 synchronized frames across 11 operational environments (6 field environments, 5 urban roads), featuring: 1) 144-channel LiDAR point clouds, 2) ground-view RGB images, and 3) aerial orthophotos. Centimeter-accurate georeferencing is ensured through GNSS fusion and post-processed kinematic positioning. The dataset uniquely integrates field environments and high-resolution LiDAR-aerial-ground data triplets, enabling rigorous evaluation of 3-DoF pose estimation algorithms for orientation alignment and coordinate transformation between perspectives. This resource supports the development of robust localization systems for field robots in GNSS-denied conditions, emphasizing cross-view feature matching and multisensor fusion. Light Detection And Ranging (LiDAR)-enhanced ground truth further distinguishes its utility for complex outdoor navigation research.
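The 3-DoF pose the abstract refers to is a planar pose (x, y, yaw) that aligns the ground view with the aerial orthophoto. As a minimal illustrative sketch (not the paper's method), the pose can be written as a 3x3 homogeneous transform that maps points from the robot's ground frame into the aerial/map frame; all function names here are hypothetical.

```python
import math

def make_pose(tx, ty, yaw):
    """3-DoF planar pose (x, y, heading in radians) as a 3x3 homogeneous matrix."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, tx],
            [s,  c, ty],
            [0,  0,  1]]

def transform_points(pose, points):
    """Map 2-D points from the ground (robot) frame into the aerial/map frame."""
    out = []
    for x, y in points:
        xw = pose[0][0] * x + pose[0][1] * y + pose[0][2]
        yw = pose[1][0] * x + pose[1][1] * y + pose[1][2]
        out.append((xw, yw))
    return out

# Hypothetical example: robot at (10, 5) in the aerial map, heading 90 degrees.
pose = make_pose(10.0, 5.0, math.pi / 2)
# A point 2 m ahead of the robot (along its own x-axis) lands at roughly (10, 7).
print(transform_points(pose, [(2.0, 0.0)]))
```

Evaluating a cross-view algorithm then amounts to comparing its estimated (tx, ty, yaw) against the dataset's centimeter-accurate georeferenced ground truth.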


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/aa45/12059049/20c0d71e2db4/41597_2025_5075_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/aa45/12059049/b6fbc3f821cb/41597_2025_5075_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/aa45/12059049/f2f88c692cde/41597_2025_5075_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/aa45/12059049/e541e6a2abc9/41597_2025_5075_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/aa45/12059049/8247133e5a1b/41597_2025_5075_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/aa45/12059049/6e28d12de36c/41597_2025_5075_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/aa45/12059049/4e438d1e27ba/41597_2025_5075_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/aa45/12059049/175bf2caef23/41597_2025_5075_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/aa45/12059049/ec242fa4a84e/41597_2025_5075_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/aa45/12059049/93ffdcc502da/41597_2025_5075_Fig10_HTML.jpg

Similar Articles

1. A multi-modality ground-to-air cross-view pose estimation dataset for field robots.
   Sci Data. 2025 May 7;12(1):754. doi: 10.1038/s41597-025-05075-9.
2. On the precision of 6 DoF IMU-LiDAR based localization in GNSS-denied scenarios.
   Front Robot AI. 2023 Jan 24;10:1064930. doi: 10.3389/frobt.2023.1064930. eCollection 2023.
3. MUN-FRL: A Visual-Inertial-LiDAR Dataset for Aerial Autonomous Navigation and Mapping.
   Int J Rob Res. 2024 Oct;43(12):1853-1866. doi: 10.1177/02783649241238358. Epub 2024 Apr 16.
4. A GNSS/INS/LiDAR Integration Scheme for UAV-Based Navigation in GNSS-Challenging Environments.
   Sensors (Basel). 2022 Dec 16;22(24):9908. doi: 10.3390/s22249908.
5. GNSS/LiDAR-Based Navigation of an Aerial Robot in Sparse Forests.
   Sensors (Basel). 2019 Sep 20;19(19):4061. doi: 10.3390/s19194061.
6. Real-Time Onboard 3D State Estimation of an Unmanned Aerial Vehicle in Multi-Environments Using Multi-Sensor Data Fusion.
   Sensors (Basel). 2020 Feb 9;20(3):919. doi: 10.3390/s20030919.
7. INS/LIDAR/Stereo SLAM Integration for Precision Navigation in GNSS-Denied Environments.
   Sensors (Basel). 2023 Aug 25;23(17):7424. doi: 10.3390/s23177424.
8. A Vision/Inertial Navigation/Global Navigation Satellite Integrated System for Relative and Absolute Localization in Land Vehicles.
   Sensors (Basel). 2024 May 12;24(10):3079. doi: 10.3390/s24103079.
9. Advanced Monocular Outdoor Pose Estimation in Autonomous Systems: Leveraging Optical Flow, Depth Estimation, and Semantic Segmentation with Dynamic Object Removal.
   Sensors (Basel). 2024 Dec 17;24(24):8040. doi: 10.3390/s24248040.
10. LiDAR-Based Sensor Fusion SLAM and Localization for Autonomous Driving Vehicles in Complex Scenarios.
   J Imaging. 2023 Feb 20;9(2):52. doi: 10.3390/jimaging9020052.
