(MARGOT) Monocular Camera-Based Robot Grasping Strategy for Metallic Objects.

Affiliations

BE-CEM Beams Department, Controls, Electronics and Mechatronics Group, European Organization for Nuclear Research (CERN), 1217 Geneva, Switzerland.

Interactive Robotic Systems Lab, Jaume I University of Castellón, 12006 Castellón de la Plana, Spain.

Publication Information

Sensors (Basel). 2023 Jun 5;23(11):5344. doi: 10.3390/s23115344.

DOI: 10.3390/s23115344
PMID: 37300071
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10256039/
Abstract

Robotic handling of objects is not always a trivial assignment, even in teleoperation where, in most cases, this might lead to stressful labor for operators. To reduce the task difficulty, supervised motions could be performed in safe scenarios to reduce the workload in these non-critical steps by using machine learning and computer vision techniques. This paper describes a novel grasping strategy based on a groundbreaking geometrical analysis which extracts diametrically opposite points taking into account surface smoothing (even those target objects that might conform highly complex shapes) to guarantee the uniformity of the grasping. It uses a monocular camera, as we are often facing space restrictions that generate the need to use laparoscopic cameras integrated in the tools, to recognize and isolate targets from the background, estimating their spatial coordinates and providing the best possible stable grasping points for both feature and featureless objects. It copes with reflections and shadows produced by light sources (which require extra effort to extract their geometrical properties) in unstructured facilities such as nuclear power plants or particle accelerators on scientific equipment. Based on the experimental results, utilizing a specialized dataset improved the detection of metallic objects in low-contrast environments, resulting in the successful application of the algorithm with error rates in the scale of millimeters in the majority of repeatability and accuracy tests.
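The core geometric idea in the abstract, finding a pair of diametrically opposite contour points after smoothing the surface outline so the grasp closes uniformly, can be sketched in a few lines. This is an illustrative approximation only, not the paper's implementation: the function names (`smooth_contour`, `antipodal_grasp`) and the centroid-based opposition score are assumptions, and MARGOT's full pipeline additionally covers detection, segmentation, and monocular depth estimation.

```python
import math

def smooth_contour(pts, k=3):
    # Circular moving average over 2k+1 neighbors to suppress contour noise
    # (reflections and shadows on metal produce jagged extracted outlines).
    n = len(pts)
    out = []
    for i in range(n):
        xs = [pts[(i + d) % n][0] for d in range(-k, k + 1)]
        ys = [pts[(i + d) % n][1] for d in range(-k, k + 1)]
        out.append((sum(xs) / (2 * k + 1), sum(ys) / (2 * k + 1)))
    return out

def antipodal_grasp(pts):
    # Score every contour pair by how nearly opposite the two points lie
    # with respect to the centroid; 1.0 means exactly diametrical.
    n = len(pts)
    cx = sum(p[0] for p in pts) / n
    cy = sum(p[1] for p in pts) / n
    best, best_score = None, -2.0
    for i in range(n):
        vi = (pts[i][0] - cx, pts[i][1] - cy)
        for j in range(i + 1, n):
            vj = (pts[j][0] - cx, pts[j][1] - cy)
            norm = math.hypot(*vi) * math.hypot(*vj)
            if norm == 0.0:
                continue
            score = -(vi[0] * vj[0] + vi[1] * vj[1]) / norm
            if score > best_score:
                best_score, best = score, (pts[i], pts[j])
    return best, best_score
```

On a convex, roughly symmetric outline this returns the expected diameter-like pair; for the highly complex shapes the paper targets, a real implementation would also need to check local surface normals at the two candidate points, which this centroid heuristic ignores.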


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ffd5/10256039/541b31a18cf2/sensors-23-05344-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ffd5/10256039/7e142e4b4d86/sensors-23-05344-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ffd5/10256039/8e00b3d834fd/sensors-23-05344-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ffd5/10256039/6129ff7201aa/sensors-23-05344-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ffd5/10256039/4aebd8884b7b/sensors-23-05344-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ffd5/10256039/a40250783b96/sensors-23-05344-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ffd5/10256039/d95dfc462d4d/sensors-23-05344-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ffd5/10256039/e8469aa8c392/sensors-23-05344-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ffd5/10256039/e95f7c1e0a19/sensors-23-05344-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ffd5/10256039/10b8089a7bcd/sensors-23-05344-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ffd5/10256039/1eddae76e92e/sensors-23-05344-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ffd5/10256039/097cf4758728/sensors-23-05344-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ffd5/10256039/5afae9b4cb01/sensors-23-05344-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ffd5/10256039/5a65f691de76/sensors-23-05344-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ffd5/10256039/dbc41860fb1c/sensors-23-05344-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ffd5/10256039/222de872d436/sensors-23-05344-g016.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ffd5/10256039/ec183aa7c66b/sensors-23-05344-g017.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ffd5/10256039/9e5d37631dbf/sensors-23-05344-g018.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ffd5/10256039/8274a0ba7a3c/sensors-23-05344-g019.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ffd5/10256039/36931e48f7da/sensors-23-05344-g020.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ffd5/10256039/98ef9895732e/sensors-23-05344-g021.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ffd5/10256039/948c883b687d/sensors-23-05344-g022.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ffd5/10256039/03cd925752aa/sensors-23-05344-g023.jpg

Similar Articles

1
(MARGOT) Monocular Camera-Based Robot Grasping Strategy for Metallic Objects.
Sensors (Basel). 2023 Jun 5;23(11):5344. doi: 10.3390/s23115344.
2
Pixel-Reasoning-Based Robotics Fine Grasping for Novel Objects with Deep EDINet Structure.
Sensors (Basel). 2022 Jun 4;22(11):4283. doi: 10.3390/s22114283.
3
A Passively Conforming Soft Robotic Gripper with Three-Dimensional Negative Bending Stiffness Fingers.
Soft Robot. 2023 Jun;10(3):556-567. doi: 10.1089/soro.2021.0200. Epub 2023 Feb 28.
4
Event-Based Robotic Grasping Detection With Neuromorphic Vision Sensor and Event-Grasping Dataset.
Front Neurorobot. 2020 Oct 8;14:51. doi: 10.3389/fnbot.2020.00051. eCollection 2020.
5
Adaptive Variable Stiffness Particle Phalange for Robust and Durable Robotic Grasping.
Soft Robot. 2020 Dec;7(6):743-757. doi: 10.1089/soro.2019.0089. Epub 2020 Apr 22.
6
Monocular Robust Depth Estimation Vision System for Robotic Tasks Interventions in Metallic Targets.
Sensors (Basel). 2019 Jul 22;19(14):3220. doi: 10.3390/s19143220.
7
Blending of brain-machine interface and vision-guided autonomous robotics improves neuroprosthetic arm performance during grasping.
J Neuroeng Rehabil. 2016 Mar 18;13:28. doi: 10.1186/s12984-016-0134-9.
8
Single-Camera Multi-View 6DoF pose estimation for robotic grasping.
Front Neurorobot. 2023 Jun 13;17:1136882. doi: 10.3389/fnbot.2023.1136882. eCollection 2023.
9
An instrumented glove for grasp specification in virtual-reality-based point-and-direct telerobotics.
IEEE Trans Syst Man Cybern B Cybern. 1997 Oct;27(5):835-46. doi: 10.1109/3477.623236.
10
A Vision-Driven Collaborative Robotic Grasping System Tele-Operated by Surface Electromyography.
Sensors (Basel). 2018 Jul 20;18(7):2366. doi: 10.3390/s18072366.

Cited By

1
G-RCenterNet: Reinforced CenterNet for Robotic Arm Grasp Detection.
Sensors (Basel). 2024 Dec 20;24(24):8141. doi: 10.3390/s24248141.

References

1
Pixel-Reasoning-Based Robotics Fine Grasping for Novel Objects with Deep EDINet Structure.
Sensors (Basel). 2022 Jun 4;22(11):4283. doi: 10.3390/s22114283.
2
Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-Shot Cross-Dataset Transfer.
IEEE Trans Pattern Anal Mach Intell. 2022 Mar;44(3):1623-1637. doi: 10.1109/TPAMI.2020.3019967. Epub 2022 Feb 3.
3
Monocular Robust Depth Estimation Vision System for Robotic Tasks Interventions in Metallic Targets.
Sensors (Basel). 2019 Jul 22;19(14):3220. doi: 10.3390/s19143220.
4
Focal Loss for Dense Object Detection.
IEEE Trans Pattern Anal Mach Intell. 2020 Feb;42(2):318-327. doi: 10.1109/TPAMI.2018.2858826. Epub 2018 Jul 23.
5
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.
IEEE Trans Pattern Anal Mach Intell. 2017 Jun;39(6):1137-1149. doi: 10.1109/TPAMI.2016.2577031. Epub 2016 Jun 6.
6
A quantitative evaluation of confidence measures for stereo vision.
IEEE Trans Pattern Anal Mach Intell. 2012 Nov;34(11):2121-33. doi: 10.1109/TPAMI.2012.46.
7
Stereo processing by semiglobal matching and mutual information.
IEEE Trans Pattern Anal Mach Intell. 2008 Feb;30(2):328-41. doi: 10.1109/TPAMI.2007.1166.
8
One-shot learning of object categories.
IEEE Trans Pattern Anal Mach Intell. 2006 Apr;28(4):594-611. doi: 10.1109/TPAMI.2006.79.