

Assistive Grasping Based on Laser-point Detection with Application to Wheelchair-mounted Robotic Arms.

Affiliations

Industrial Research Institute of Robotics and Intelligent Equipment, Harbin Institute of Technology, Weihai 264209, China.

Department of Industrial Engineering, University of Houston, Houston, TX 77004, USA.

Publication Information

Sensors (Basel). 2019 Jan 14;19(2):303. doi: 10.3390/s19020303.

DOI: 10.3390/s19020303
PMID: 30646513
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC6359181/
Abstract

As the aging of the population becomes more severe, wheelchair-mounted robotic arms (WMRAs) are attracting increased attention. Laser-pointer interaction is an attractive method that lets a user unambiguously point out an object to be picked up, and, as an intuitive interaction mode, it gives the user a greater sense of participation. However, the human–robot interaction problem remains to be properly tackled, and traditional laser-point interaction still suffers from poor real-time performance and low accuracy against dynamic backgrounds. In this study, an advanced laser-point detection method is combined with an improved pose-estimation algorithm so that a laser pointer can mediate the interaction between a human and a WMRA in an indoor environment. Assistive grasping by laser selection consists of two key steps. In the first step, images captured by an RGB-D camera are pre-processed and fed to a convolutional neural network (CNN) to determine the 2D coordinates of the laser point and of the objects within the image; the centroid coordinates of the selected object are then obtained from the depth information. In this way, the object to be picked up and its location are determined. Experimental results show that the laser point can be detected with almost 100% accuracy in a complex environment. In the second step, a compound pose-estimation algorithm built on a sparse set of multi-view templates is applied, consisting of both coarse and precise matching of the target against the template objects, which greatly improves grasping performance. The proposed algorithms were implemented on a Kinova Jaco robotic arm, and the experimental results demonstrate their effectiveness. Compared with commonly accepted methods, the time required for pose generation is reduced from 5.36 s to 4.43 s, while the pose-estimation error is reduced from 21.31% to 3.91%.
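
The first step, locating the laser point and recovering a 3D grasp target from RGB-D data, reduces geometrically to back-projecting detected pixels through the camera intrinsics. The sketch below (Python) shows only that geometric step under a pinhole-camera assumption: the CNN detector is abstracted away, and the intrinsics FX, FY, CX, CY are placeholder values, not the paper's calibration.

import numpy as np

# Placeholder pinhole intrinsics (focal lengths and principal point, in pixels).
# Real values come from the RGB-D camera's calibration, not from the paper.
FX, FY, CX, CY = 615.0, 615.0, 320.0, 240.0

def deproject(u, v, z):
    """Back-project pixel (u, v) with depth z (metres) to a camera-frame 3D point."""
    return np.array([(u - CX) * z / FX, (v - CY) * z / FY, z])

def object_centroid(depth, mask):
    """Centroid of the selected object's visible surface: average the
    deprojected points inside the detection mask, skipping depth holes."""
    vs, us = np.nonzero(mask)          # pixel coordinates inside the mask
    zs = depth[vs, us]
    valid = zs > 0                     # zero depth marks missing measurements
    pts = [deproject(u, v, z) for u, v, z in zip(us[valid], vs[valid], zs[valid])]
    return np.mean(pts, axis=0)

Passing the depth frame aligned to the RGB frame, together with the binary mask of the laser-selected object, yields the grasp target in the camera frame.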

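
For the second step, the record gives only the outline of the compound pose-estimation algorithm: coarse matching against a sparse set of multi-view templates, followed by precise matching. As an illustration of that coarse-then-fine pattern, and not the paper's actual method, here is a dependency-free sketch in which a deliberately crude extent-based selector stands in for the coarse matcher and a point-to-point ICP loop stands in for the precise stage.

import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) mapping corresponding points src -> dst."""
    sc, dc = src.mean(0), dst.mean(0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, dc - R @ sc

def coarse_select(scene, template_views):
    """Coarse matching: pick the stored view whose sorted bounding extents best
    match the observed cloud. A crude stand-in for the paper's coarse matcher."""
    extent = lambda p: np.sort(p.max(0) - p.min(0))
    return min(template_views, key=lambda v: np.linalg.norm(extent(v) - extent(scene)))

def refine_icp(template, scene, iters=30):
    """Precise matching: re-estimate the full transform each iteration from
    nearest-neighbour correspondences (brute force keeps the sketch short)."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = template @ R.T + t
        nn = ((moved[:, None, :] - scene[None, :, :]) ** 2).sum(-1).argmin(1)
        R, t = kabsch(template, scene[nn])
    return R, t

A real pipeline would use richer view descriptors and robust correspondence rejection; the point of the sketch is only that a cheap coarse choice of template seeds the fine registration, which is what lets the template set stay sparse.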

[Figures 1-15 (sensors-19-00303-g001 through g015): image files available via the full-text link above.]

Similar Articles

1. Assistive Grasping Based on Laser-point Detection with Application to Wheelchair-mounted Robotic Arms.
Sensors (Basel). 2019 Jan 14;19(2):303. doi: 10.3390/s19020303.
2. Wheelchair-mounted robotic arms: a survey of occupational therapists' practices and perspectives.
Disabil Rehabil Assist Technol. 2023 Nov;18(8):1421-1430. doi: 10.1080/17483107.2021.2017030. Epub 2021 Dec 22.
3. Single-Camera Multi-View 6DoF pose estimation for robotic grasping.
Front Neurorobot. 2023 Jun 13;17:1136882. doi: 10.3389/fnbot.2023.1136882. eCollection 2023.
4. Object Affordance-Based Implicit Interaction for Wheelchair-Mounted Robotic Arm Using a Laser Pointer.
Sensors (Basel). 2023 May 4;23(9):4477. doi: 10.3390/s23094477.
5. Fast and precise 6D pose estimation of textureless objects using the point cloud and gray image.
Appl Opt. 2018 Oct 1;57(28):8154-8165. doi: 10.1364/AO.57.008154.
6. Single RGB Image 6D Object Grasping System Using Pixel-Wise Voting Network.
Micromachines (Basel). 2022 Feb 13;13(2):293. doi: 10.3390/mi13020293.
7. Real-Time Fruit Recognition and Grasping Estimation for Robotic Apple Harvesting.
Sensors (Basel). 2020 Oct 4;20(19):5670. doi: 10.3390/s20195670.
8. Optimal Design of a Soft Robotic Gripper for Grasping Unknown Objects.
Soft Robot. 2018 Aug;5(4):452-465. doi: 10.1089/soro.2017.0121. Epub 2018 May 9.
9. Object Grasp Control of a 3D Robot Arm by Combining EOG Gaze Estimation and Camera-Based Object Recognition.
Biomimetics (Basel). 2023 May 18;8(2):208. doi: 10.3390/biomimetics8020208.
10. A Vision-Driven Collaborative Robotic Grasping System Tele-Operated by Surface Electromyography.
Sensors (Basel). 2018 Jul 20;18(7):2366. doi: 10.3390/s18072366.

Cited By

1. Intention Reasoning for User Action Sequences via Fusion of Object Task and Object Action Affordances Based on Dempster-Shafer Theory.
Sensors (Basel). 2025 Mar 22;25(7):1992. doi: 10.3390/s25071992.
2. G-RCenterNet: Reinforced CenterNet for Robotic Arm Grasp Detection.
Sensors (Basel). 2024 Dec 20;24(24):8141. doi: 10.3390/s24248141.
3. Object Affordance-Based Implicit Interaction for Wheelchair-Mounted Robotic Arm Using a Laser Pointer.
Sensors (Basel). 2023 May 4;23(9):4477. doi: 10.3390/s23094477.

References

1. A Classification-Lock Tracking Strategy Allowing a Person-Following Robot to Operate in a Complicated Indoor Environment.
Sensors (Basel). 2018 Nov 12;18(11):3903. doi: 10.3390/s18113903.
2. Development, Dynamic Modeling, and Multi-Modal Control of a Therapeutic Exoskeleton for Upper Limb Rehabilitation Training.
Sensors (Basel). 2018 Oct 24;18(11):3611. doi: 10.3390/s18113611.
3. Gabor Convolutional Networks.
IEEE Trans Image Process. 2018 Sep;27(9):4357-4366. doi: 10.1109/TIP.2018.2835143.