

Comparison of Graph Fitting and Sparse Deep Learning Model for Robot Pose Estimation.

Affiliation

Faculty of Computer Science and Information Technology, West Pomeranian University of Technology in Szczecin, ul. Żołnierska 49, 71-210 Szczecin, Poland.

Publication

Sensors (Basel). 2022 Aug 29;22(17):6518. doi: 10.3390/s22176518.

DOI: 10.3390/s22176518
PMID: 36080976
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9460051/
Abstract

The paper presents a simple, yet robust computer vision system for robot arm tracking with the use of RGB-D cameras. Tracking means to measure in real time the robot state given by three angles and with known restrictions about the robot geometry. The tracking system consists of two parts: image preprocessing and machine learning. In the machine learning part, we compare two approaches: fitting the robot pose to the point cloud and fitting the convolutional neural network model to the sparse 3D depth images. The advantage of the presented approach is direct use of the point cloud transformed to the sparse image in the network input and use of sparse convolutional and pooling layers (sparse CNN). The experiments confirm that the robot tracking is performed in real time and with an accuracy comparable to the accuracy of the depth sensor.

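The article does not include source code. As a minimal sketch of the preprocessing idea the abstract describes, projecting an RGB-D point cloud onto the image plane to obtain a sparse depth image that sparse convolutional layers can consume, the following uses a standard pinhole camera model. The function name and the intrinsics (fx, fy, cx, cy) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def point_cloud_to_sparse_depth(points, fx, fy, cx, cy, width, height):
    """Project a 3D point cloud (N, 3) onto a pinhole camera's image plane,
    producing a sparse depth image: zero everywhere except at pixels hit by
    a point, which store that point's depth (z)."""
    depth = np.zeros((height, width), dtype=np.float32)
    # Keep only points in front of the camera (z > 0).
    pts = points[points[:, 2] > 0]
    # Pinhole projection: u = x * fx / z + cx, v = y * fy / z + cy.
    u = np.round(pts[:, 0] * fx / pts[:, 2] + cx).astype(int)
    v = np.round(pts[:, 1] * fy / pts[:, 2] + cy).astype(int)
    # Discard points that project outside the image bounds.
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, z = u[inside], v[inside], pts[inside, 2]
    # When several points hit the same pixel, the nearest one should win:
    # write far points first so near points overwrite them.
    order = np.argsort(-z)
    depth[v[order], u[order]] = z[order]
    return depth
```

Such a sparse image keeps the point cloud's geometry in a regular grid while leaving most pixels empty, which is what the sparse convolution and pooling layers mentioned in the abstract are designed to exploit.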

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c7f4/9460051/92d96a3cc067/sensors-22-06518-g016.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c7f4/9460051/8fef87ab51d1/sensors-22-06518-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c7f4/9460051/a84645c07b6e/sensors-22-06518-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c7f4/9460051/b5915cc6d327/sensors-22-06518-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c7f4/9460051/2ff3ae55a706/sensors-22-06518-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c7f4/9460051/cf1c35d51402/sensors-22-06518-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c7f4/9460051/dd3d2a21277a/sensors-22-06518-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c7f4/9460051/482f80d25c88/sensors-22-06518-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c7f4/9460051/671abc020625/sensors-22-06518-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c7f4/9460051/b1ebfe97ac7f/sensors-22-06518-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c7f4/9460051/f36593e59bbd/sensors-22-06518-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c7f4/9460051/7f70e343ff53/sensors-22-06518-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c7f4/9460051/652f31fc280a/sensors-22-06518-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c7f4/9460051/b5e978f641ec/sensors-22-06518-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c7f4/9460051/844fc4023e52/sensors-22-06518-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c7f4/9460051/3325c4441afa/sensors-22-06518-g015.jpg

Similar Articles

1. Comparison of Graph Fitting and Sparse Deep Learning Model for Robot Pose Estimation.
   Sensors (Basel). 2022 Aug 29;22(17):6518. doi: 10.3390/s22176518.
2. A deep learning approach for pose estimation from volumetric OCT data.
   Med Image Anal. 2018 May;46:162-179. doi: 10.1016/j.media.2018.03.002. Epub 2018 Mar 10.
3. Facial Expressions Recognition for Human-Robot Interaction Using Deep Convolutional Neural Networks with Rectified Adam Optimizer.
   Sensors (Basel). 2020 Apr 23;20(8):2393. doi: 10.3390/s20082393.
4. Image-based laparoscopic tool detection and tracking using convolutional neural networks: a review of the literature.
   Comput Assist Surg (Abingdon). 2020 Dec;25(1):15-28. doi: 10.1080/24699322.2020.1801842.
5. Real-time multiple human perception with color-depth cameras on a mobile robot.
   IEEE Trans Cybern. 2013 Oct;43(5):1429-41. doi: 10.1109/TCYB.2013.2275291. Epub 2013 Aug 21.
6. Indirect iterative learning control for a discrete visual servo without a camera-robot model.
   IEEE Trans Syst Man Cybern B Cybern. 2007 Aug;37(4):863-76. doi: 10.1109/tsmcb.2007.895355.
7. WHSP-Net: A Weakly-Supervised Approach for 3D Hand Shape and Pose Recovery from a Single Depth Image.
   Sensors (Basel). 2019 Aug 31;19(17):3784. doi: 10.3390/s19173784.
8. Real-Time Human Action Recognition with a Low-Cost RGB Camera and Mobile Robot Platform.
   Sensors (Basel). 2020 May 19;20(10):2886. doi: 10.3390/s20102886.
9. Unknown Object Detection Using a One-Class Support Vector Machine for a Cloud-Robot System.
   Sensors (Basel). 2022 Feb 10;22(4):1352. doi: 10.3390/s22041352.
10. Indoor Place Category Recognition for a Cleaning Robot by Fusing a Probabilistic Approach and Deep Learning.
    IEEE Trans Cybern. 2022 Aug;52(8):7265-7276. doi: 10.1109/TCYB.2021.3052499. Epub 2022 Jul 19.
