

A New Kinect V2-Based Method for Visual Recognition and Grasping of a Yarn-Bobbin-Handling Robot.

Authors

Han Jinghai, Liu Bo, Jia Yongle, Jin Shoufeng, Sulowicz Maciej, Glowacz Adam, Królczyk Grzegorz, Li Zhixiong

Affiliations

Institute of Rail Transport, Nanjing Vocational Institute of Transport Technology, Nanjing 211188, China.

College of Mechanical and Electrical Engineering, Xi'an Polytechnic University, Xi'an 710600, China.

Publication

Micromachines (Basel). 2022 May 31;13(6):886. doi: 10.3390/mi13060886.

DOI: 10.3390/mi13060886
PMID: 35744500
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9227217/
Abstract

This work proposes a Kinect V2-based visual method to solve the human dependence on the yarn bobbin robot in the grabbing operation. In this new method, a Kinect V2 camera is used to produce three-dimensional (3D) yarn-bobbin point cloud data for the robot in a work scenario. After removing the noise point cloud through a proper filtering process, the M-estimator sample consensus (MSAC) algorithm is employed to find the fitting plane of the 3D cloud data; then, the principal component analysis (PCA) is adopted to roughly register the template point cloud and the yarn-bobbin point cloud to define the initial position of the yarn bobbin. Lastly, the iterative closest point (ICP) algorithm is used to achieve precise registration of the 3D cloud data to determine the precise pose of the yarn bobbin. To evaluate the performance of the proposed method, an experimental platform is developed to validate the grabbing operation of the yarn bobbin robot in different scenarios. The analysis results show that the average working time of the robot system is within 10 s, and the grasping success rate is above 80%, which meets the industrial production requirements.
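The MSAC step described above can be illustrated with a minimal sketch. MSAC differs from plain RANSAC in its scoring: instead of counting inliers, it sums truncated squared residuals, so models with tighter fits win ties. This is not the authors' implementation; the threshold, iteration count, and the synthetic "table plane plus bobbin" scene below are assumptions chosen for illustration only.

```python
import numpy as np

def msac_plane(points, iters=200, threshold=0.05, seed=0):
    """MSAC plane fit: like RANSAC, but the model score is the sum of
    truncated squared residuals rather than a simple inlier count."""
    rng = np.random.default_rng(seed)
    best_cost, best_model = np.inf, None
    for _ in range(iters):
        p = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        norm = np.linalg.norm(n)
        if norm < 1e-12:                 # skip degenerate (collinear) samples
            continue
        n = n / norm
        d = -n @ p[0]
        r2 = (points @ n + d) ** 2       # squared point-to-plane residuals
        cost = np.minimum(r2, threshold ** 2).sum()   # MSAC truncated loss
        if cost < best_cost:
            best_cost, best_model = cost, (n, d)
    n, d = best_model
    inliers = (points @ n + d) ** 2 < threshold ** 2
    return n, d, inliers

# Demo: a noisy work-surface plane (z ~ 0) with bobbin points above it.
rng = np.random.default_rng(1)
plane = np.c_[rng.uniform(-1, 1, (400, 2)), rng.normal(0, 0.005, 400)]
bobbin = rng.uniform(0.2, 0.8, (100, 3))
cloud = np.vstack([plane, bobbin])
n, d, inliers = msac_plane(cloud)
non_plane = cloud[~inliers]              # points kept after plane removal
print(len(non_plane))
```

Removing the fitted plane's inliers leaves only the off-plane points, which is the role this step plays before registration in the pipeline above.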

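The two-stage registration (PCA coarse alignment, then ICP refinement) can be sketched with numpy alone. This is a simplified stand-in for the paper's method, not its implementation: the sign-fixing heuristic assumes a moderate initial rotation and distinct principal axes, the ICP uses brute-force nearest neighbours, and the synthetic cloud and pose are made up for the demo.

```python
import numpy as np

def pca_coarse_align(source, target):
    """Coarse registration: match centroids and principal axes of the two
    clouds. Eigenvector sign ambiguity is resolved by axis correlation,
    which assumes the initial rotation is not too large."""
    mu_s, mu_t = source.mean(0), target.mean(0)
    _, Vs = np.linalg.eigh(np.cov((source - mu_s).T))
    _, Vt = np.linalg.eigh(np.cov((target - mu_t).T))
    for i in range(3):
        if Vt[:, i] @ Vs[:, i] < 0:
            Vt[:, i] *= -1
    R = Vt @ Vs.T
    return R, mu_t - R @ mu_s

def icp(source, target, iters=50, tol=1e-10):
    """Fine registration: point-to-point ICP with brute-force nearest
    neighbours and the Kabsch/SVD rigid-transform solve per iteration."""
    src = source.copy()
    R_tot, t_tot = np.eye(3), np.zeros(3)
    prev = err = np.inf
    for _ in range(iters):
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        nn = target[d2.argmin(1)]        # nearest-neighbour correspondences
        err = np.sqrt(d2.min(1)).mean()
        if prev - err < tol:
            break
        prev = err
        mu_s, mu_n = src.mean(0), nn.mean(0)
        H = (src - mu_s).T @ (nn - mu_n)
        U, _, VT = np.linalg.svd(H)
        R = VT.T @ U.T
        if np.linalg.det(R) < 0:         # guard against a reflection solution
            VT[-1] *= -1
            R = VT.T @ U.T
        t = mu_n - R @ mu_s
        src = src @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot, err

# Demo: recover a known pose of a synthetic, asymmetric point cloud.
rng = np.random.default_rng(0)
template = rng.normal(size=(200, 3)) * np.array([3.0, 2.0, 1.0])
a = 0.4
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.3, 0.2])
scene = template @ R_true.T + t_true
R0, t0 = pca_coarse_align(template, scene)          # rough pose
R1, t1, err = icp(template @ R0.T + t0, scene)      # refined pose
print("mean nearest-neighbour error:", err)
```

The composed transform (R1 @ R0, R1 @ t0 + t1) is the estimated pose; in the robot system this pose is what parameterises the grasp.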

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ab61/9227217/67be3df72e29/micromachines-13-00886-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ab61/9227217/bafc26ef60eb/micromachines-13-00886-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ab61/9227217/7b5e55f8c71d/micromachines-13-00886-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ab61/9227217/1b80314a86d2/micromachines-13-00886-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ab61/9227217/0fc53deb3aca/micromachines-13-00886-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ab61/9227217/ebc49ccbacea/micromachines-13-00886-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ab61/9227217/d5e234cc45e9/micromachines-13-00886-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ab61/9227217/1e1313da4179/micromachines-13-00886-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ab61/9227217/0976fbe36441/micromachines-13-00886-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ab61/9227217/bcd0d69c56c4/micromachines-13-00886-g010.jpg

Similar Articles

1. A New Kinect V2-Based Method for Visual Recognition and Grasping of a Yarn-Bobbin-Handling Robot.
Micromachines (Basel). 2022 May 31;13(6):886. doi: 10.3390/mi13060886.
2. A Study on the 3D Reconstruction Strategy of a Sheep Body Based on a Kinect v2 Depth Camera Array.
Animals (Basel). 2024 Aug 23;14(17):2457. doi: 10.3390/ani14172457.
3. The Method of Creel Positioning Based on Monocular Vision.
Sensors (Basel). 2022 Sep 2;22(17):6657. doi: 10.3390/s22176657.
4. An Improved Point Cloud Descriptor for Vision Based Robotic Grasping System.
Sensors (Basel). 2019 May 14;19(10):2225. doi: 10.3390/s19102225.
5. Three-Dimensional Reconstruction Method of Rapeseed Plants in the Whole Growth Period Using RGB-D Camera.
Sensors (Basel). 2021 Jul 6;21(14):4628. doi: 10.3390/s21144628.
6. Three-Dimensional Object Recognition and Registration for Robotic Grasping Systems Using a Modified Viewpoint Feature Histogram.
Sensors (Basel). 2016 Nov 23;16(11):1969. doi: 10.3390/s16111969.
7. Research on Intelligent Robot Point Cloud Grasping in Internet of Things.
Micromachines (Basel). 2022 Nov 17;13(11):1999. doi: 10.3390/mi13111999.
8. Point cloud registration method for maize plants based on conical surface fitting-ICP.
Sci Rep. 2022 Apr 27;12(1):6852. doi: 10.1038/s41598-022-10921-6.
9. Lightweight bobbin yarn detection model for auto-coner with yarn bank.
Sci Rep. 2024 Jul 12;14(1):16136. doi: 10.1038/s41598-024-67196-2.
10. A Fast Robot Identification and Mapping Algorithm Based on Kinect Sensor.
Sensors (Basel). 2015 Aug 14;15(8):19937-67. doi: 10.3390/s150819937.

References Cited in This Article

1. Automatic Super-Surface Removal in Complex 3D Indoor Environments Using Iterative Region-Based RANSAC.
Sensors (Basel). 2021 May 27;21(11):3724. doi: 10.3390/s21113724.
2. A Vision Based Detection Method for Narrow Butt Joints and a Robotic Seam Tracking System.
Sensors (Basel). 2019 Mar 6;19(5):1144. doi: 10.3390/s19051144.