

Spacecraft Homography Pose Estimation with Single-Stage Deep Convolutional Neural Network.

Authors

Chen Shengpeng, Yang Wenyi, Wang Wei, Mai Jianting, Liang Jian, Zhang Xiaohu

Affiliation

School of Aeronautics and Astronautics, Sun Yat-sen University, Shenzhen 510275, China.

Publication

Sensors (Basel). 2024 Mar 12;24(6):1828. doi: 10.3390/s24061828.

DOI: 10.3390/s24061828
PMID: 38544091
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10974870/
Abstract

Spacecraft pose estimation using computer vision has garnered increasing attention in research areas such as automation system theory, control theory, sensors and instruments, robot technology, and automation software. Confronted with the extreme environment of space, existing spacecraft pose estimation methods are predominantly multi-stage networks with complex operations. In this study, we propose an approach for spacecraft homography pose estimation with a single-stage deep convolutional neural network for the first time. We formulated a homomorphic geometric constraint equation for spacecraft with planar features. Additionally, we employed a single-stage 2D keypoint regression network to obtain homography 2D keypoint coordinates for spacecraft. After decomposition to obtain the rough spacecraft pose based on the homography matrix constructed according to the geometric constraint equation, a loss function based on pixel errors was employed to refine the spacecraft pose. We conducted extensive experiments using widely used spacecraft pose estimation datasets and compared our method with state-of-the-art techniques in the field to demonstrate its effectiveness.
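The abstract's core geometric idea is that, for a spacecraft with planar features, the relative pose (R, t) induces a homography H = K(R + t·nᵀ/d)K⁻¹ between image keypoints, and a pixel-error (reprojection) loss can then score and refine candidate poses. A minimal numpy sketch of that constraint follows; the intrinsics, pose values, and function names are illustrative assumptions, not the paper's code:

```python
import numpy as np

def rotation_z(theta):
    """Rotation about the camera z-axis (illustrative pose component)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Assumed pinhole intrinsics (focal length and principal point are made up).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Assumed ground-truth relative pose of the planar spacecraft feature.
R = rotation_z(0.1)
t = np.array([[0.05], [0.02], [0.3]])
n = np.array([[0.0], [0.0], [1.0]])   # plane normal in the camera frame
d = 2.0                                # camera-to-plane distance

# Planar homography induced by the pose: H = K (R + t n^T / d) K^{-1}.
H = K @ (R + t @ n.T / d) @ np.linalg.inv(K)
H /= H[2, 2]                           # fix the projective scale

def project(H, pts):
    """Apply homography H to an Nx2 array of pixel coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = (H @ pts_h.T).T
    return mapped[:, :2] / mapped[:, 2:3]

# Four coplanar 2D keypoints, standing in for the keypoint network's output.
src = np.array([[100.0, 100.0], [500.0, 120.0],
                [480.0, 400.0], [120.0, 380.0]])
dst = project(H, src)

def pixel_loss(H_candidate, src, dst):
    """Mean reprojection error in pixels for a candidate homography."""
    return np.mean(np.linalg.norm(project(H_candidate, src) - dst, axis=1))

print(pixel_loss(H, src, dst))  # 0.0 for the true homography
```

In this framing, refinement means adjusting the pose parameters inside H to drive `pixel_loss` toward zero over the regressed keypoints; the paper's actual network and optimizer are not reproduced here.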


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b1b3/10974870/508742a833bf/sensors-24-01828-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b1b3/10974870/2f922917de21/sensors-24-01828-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b1b3/10974870/324870f3acc2/sensors-24-01828-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b1b3/10974870/bc4486821e57/sensors-24-01828-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b1b3/10974870/9af8a6c656a3/sensors-24-01828-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b1b3/10974870/c4c926f4864a/sensors-24-01828-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b1b3/10974870/05b1fa55fc82/sensors-24-01828-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b1b3/10974870/ec5d76216b1b/sensors-24-01828-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b1b3/10974870/8aad3e16de72/sensors-24-01828-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b1b3/10974870/5cc9d0ef89ae/sensors-24-01828-g010.jpg

Similar Articles

1. Spacecraft Homography Pose Estimation with Single-Stage Deep Convolutional Neural Network.
Sensors (Basel). 2024 Mar 12;24(6):1828. doi: 10.3390/s24061828.
2. Deep Bayesian-Assisted Keypoint Detection for Pose Estimation in Assembly Automation.
Sensors (Basel). 2023 Jul 2;23(13):6107. doi: 10.3390/s23136107.
3. Head Pose Estimation through Keypoints Matching between Reconstructed 3D Face Model and 2D Image.
Sensors (Basel). 2021 Mar 6;21(5):1841. doi: 10.3390/s21051841.
4. DSPose: Dual-Space-Driven Keypoint Topology Modeling for Human Pose Estimation.
Sensors (Basel). 2023 Sep 3;23(17):7626. doi: 10.3390/s23177626.
5. Optimization-based non-cooperative spacecraft pose estimation using stereo cameras during proximity operations.
Appl Opt. 2017 May 20;56(15):4522-4531. doi: 10.1364/AO.56.004522.
6. Detection, segmentation, and 3D pose estimation of surgical tools using convolutional neural networks and algebraic geometry.
Med Image Anal. 2021 May;70:101994. doi: 10.1016/j.media.2021.101994. Epub 2021 Feb 7.
7. SU-Net: pose estimation network for non-cooperative spacecraft on-orbit.
Sci Rep. 2023 Jul 21;13(1):11780. doi: 10.1038/s41598-023-38974-1.
8. Repeated Cross-Scale Structure-Induced Feature Fusion Network for 2D Hand Pose Estimation.
Entropy (Basel). 2023 Apr 27;25(5):724. doi: 10.3390/e25050724.
9. Keypoint-Based Disentangled Pose Network for Category-Level 6-D Object Pose Tracking.
IEEE Comput Graph Appl. 2022 Sep-Oct;42(5):28-36. doi: 10.1109/MCG.2021.3114181. Epub 2022 Oct 4.
10. Unsupervised Global and Local Homography Estimation With Motion Basis Learning.
IEEE Trans Pattern Anal Mach Intell. 2023 Jun;45(6):7885-7899. doi: 10.1109/TPAMI.2022.3223789. Epub 2023 May 5.
