



Deep Bayesian-Assisted Keypoint Detection for Pose Estimation in Assembly Automation.

Affiliations

Department of Electrical and Computer Engineering, University of California Davis, Davis, CA 95616, USA.

Greenfield Labs, Ford Motor Company, Palo Alto, CA 94304, USA.

Publication

Sensors (Basel). 2023 Jul 2;23(13):6107. doi: 10.3390/s23136107.

DOI:10.3390/s23136107
PMID:37447956
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10346187/
Abstract

Pose estimation is crucial for automating assembly tasks, yet achieving sufficient accuracy for assembly automation remains challenging and part-specific. This paper presents a novel, streamlined approach to pose estimation that facilitates automation of assembly tasks. Our proposed method employs deep learning on a limited number of annotated images to identify a set of keypoints on the parts of interest. To compensate for network shortcomings and enhance accuracy, we incorporated a Bayesian updating stage that leverages our detailed knowledge of the assembly part design. This Bayesian updating step refines the network output, significantly improving pose estimation accuracy. For this purpose, we utilized a subset of network-generated keypoint positions with higher quality as measurements, while for the remaining keypoints, the network outputs only serve as priors. The geometry data aid in constructing likelihood functions, which in turn result in enhanced posterior distributions of keypoint pixel positions. We then employed the maximum a posteriori (MAP) estimates of keypoint locations to obtain a final pose, allowing for an update to the nominal assembly trajectory. We evaluated our method on a 14-point snap-fit dash trim assembly for a Ford Mustang dashboard, demonstrating promising results. Our approach does not require tailoring to new applications, nor does it rely on extensive machine learning expertise or large amounts of training data. This makes our method a scalable and adaptable solution for production floors.
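In the simplest case of independent Gaussian priors (network keypoint detections) and Gaussian likelihoods (geometry-derived predictions), the Bayesian updating described in the abstract reduces to a precision-weighted fusion, and the MAP estimate coincides with the posterior mean. A minimal sketch of that one-keypoint case follows; the `gaussian_map_update` helper, the isotropic pixel variances, and all numbers are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gaussian_map_update(prior_mean, prior_var, meas_mean, meas_var):
    """Fuse a Gaussian prior with a Gaussian measurement likelihood.

    For Gaussians the posterior is Gaussian and its MAP estimate is the
    precision-weighted mean of prior and measurement.
    """
    post_var = 1.0 / (1.0 / prior_var + 1.0 / meas_var)
    post_mean = post_var * (prior_mean / prior_var + meas_mean / meas_var)
    return post_mean, post_var

# Network output for one low-confidence keypoint (pixel coords) acts as the prior.
prior_mean = np.array([412.0, 233.0])
prior_var = 9.0  # assumed isotropic variance, px^2

# Position predicted from part geometry and high-confidence keypoints
# acts as the measurement, with a tighter variance.
meas_mean = np.array([415.0, 230.0])
meas_var = 1.0

map_xy, post_var = gaussian_map_update(prior_mean, prior_var, meas_mean, meas_var)
# MAP estimate is pulled toward the more precise geometry-based prediction:
# map_xy ≈ [414.7, 230.3], post_var = 0.9
```

The refined MAP keypoint locations would then feed a standard 2D–3D pose solver (e.g., a PnP method) to produce the final part pose.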


Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fe3/10346187/4a352e9b9a30/sensors-23-06107-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fe3/10346187/15a201758cf1/sensors-23-06107-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fe3/10346187/88ba1a4589f3/sensors-23-06107-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fe3/10346187/255203b54c7e/sensors-23-06107-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fe3/10346187/f2be4cee337a/sensors-23-06107-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fe3/10346187/0e553c172c06/sensors-23-06107-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fe3/10346187/94a6aa888c52/sensors-23-06107-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fe3/10346187/8d24d61460fd/sensors-23-06107-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fe3/10346187/602651b6ffcd/sensors-23-06107-g0A1.jpg

Similar Articles

1
Deep Bayesian-Assisted Keypoint Detection for Pose Estimation in Assembly Automation.
Sensors (Basel). 2023 Jul 2;23(13):6107. doi: 10.3390/s23136107.
2
Spacecraft Homography Pose Estimation with Single-Stage Deep Convolutional Neural Network.
Sensors (Basel). 2024 Mar 12;24(6):1828. doi: 10.3390/s24061828.
3
Estimating Ground Reaction Forces from Two-Dimensional Pose Data: A Biomechanics-Based Comparison of AlphaPose, BlazePose, and OpenPose.
Sensors (Basel). 2022 Dec 21;23(1):78. doi: 10.3390/s23010078.
4
Manipulation Planning for Object Re-Orientation Based on Semantic Segmentation Keypoint Detection.
Sensors (Basel). 2021 Mar 24;21(7):2280. doi: 10.3390/s21072280.
5
DSPose: Dual-Space-Driven Keypoint Topology Modeling for Human Pose Estimation.
Sensors (Basel). 2023 Sep 3;23(17):7626. doi: 10.3390/s23177626.
6
Accurate Robot Arm Attitude Estimation Based on Multi-View Images and Super-Resolution Keypoint Detection Networks.
Sensors (Basel). 2024 Jan 4;24(1):305. doi: 10.3390/s24010305.
7
Repeated Cross-Scale Structure-Induced Feature Fusion Network for 2D Hand Pose Estimation.
Entropy (Basel). 2023 Apr 27;25(5):724. doi: 10.3390/e25050724.
8
Detection, segmentation, and 3D pose estimation of surgical tools using convolutional neural networks and algebraic geometry.
Med Image Anal. 2021 May;70:101994. doi: 10.1016/j.media.2021.101994. Epub 2021 Feb 7.
9
Multicow pose estimation based on keypoint extraction.
PLoS One. 2022 Jun 3;17(6):e0269259. doi: 10.1371/journal.pone.0269259. eCollection 2022.
10
PCKRF: Point Cloud Completion and Keypoint Refinement With Fusion Data for 6D Pose Estimation.
IEEE Trans Vis Comput Graph. 2024 Apr 17;PP. doi: 10.1109/TVCG.2024.3390122.

References Cited in This Article

1
Flexible Three-Dimensional Reconstruction via Structured-Light-based Visual Positioning and Global Optimization.
Sensors (Basel). 2019 Apr 1;19(7):1583. doi: 10.3390/s19071583.
2
Fast SIFT design for real-time visual feature extraction.
IEEE Trans Image Process. 2013 Aug;22(8):3158-67. doi: 10.1109/TIP.2013.2259841.