

Deep Learning Framework for Controlling Work Sequence in Collaborative Human-Robot Assembly Processes.

Affiliations

UNIDEMI, Department of Mechanical and Industrial Engineering, NOVA School of Science and Technology, Universidade NOVA de Lisboa, 2829-516 Caparica, Portugal.

Laboratório Associado de Sistemas Inteligentes, LASI, 4800-058 Guimarães, Portugal.

Publication

Sensors (Basel). 2023 Jan 3;23(1):553. doi: 10.3390/s23010553.

DOI: 10.3390/s23010553
PMID: 36617153
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9823442/
Abstract

The human-robot collaboration (HRC) solutions presented so far have the disadvantage that the interaction between humans and robots is based on the human's state or on specific gestures purposely performed by the human, thus increasing the time required to perform a task and slowing down the pace of human labor, making such solutions uninteresting. In this study, a different concept of the HRC system is introduced, consisting of an HRC framework for managing assembly processes that are executed simultaneously or individually by humans and robots. This HRC framework based on deep learning models uses only one type of data, RGB camera data, to make predictions about the collaborative workspace and human action, and consequently manage the assembly process. To validate the HRC framework, an industrial HRC demonstrator was built to assemble a mechanical component. Four different HRC frameworks were created based on the convolutional neural network (CNN) model structures: Faster R-CNN ResNet-50 and ResNet-101, YOLOv2 and YOLOv3. The HRC framework with YOLOv3 structure showed the best performance, showing a mean average performance of 72.26% and allowed the HRC industrial demonstrator to successfully complete all assembly tasks within a desired time window. The HRC framework has proven effective for industrial assembly applications.
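The abstract describes a framework in which per-frame detections from an RGB camera (workspace objects, human actions) drive the assembly sequence: a step starts only once its preconditions are visible, with no deliberate gestures required from the worker. A minimal sketch of such a detection-driven sequence controller is shown below; the class names and label sets are hypothetical illustrations, not the authors' code, and a real system would consume YOLOv3-style boxes with confidence thresholds rather than plain label sets.

```python
# Hypothetical sketch: advance an assembly sequence from per-frame detections.
# Each frame's detector output is simplified to a set of class labels.
from dataclasses import dataclass

@dataclass
class AssemblyStep:
    name: str
    required: frozenset   # labels that must be visible before this step starts
    actor: str            # "human" or "robot"

@dataclass
class SequenceController:
    steps: list
    index: int = 0

    def update(self, detections):
        """Start the next step if its preconditions are detected; else wait."""
        if self.index >= len(self.steps):
            return None                     # assembly finished
        step = self.steps[self.index]
        if step.required <= detections:     # all required labels visible
            self.index += 1
            return f"start:{step.actor}:{step.name}"
        return None                         # preconditions not yet visible

steps = [
    AssemblyStep("place_base", frozenset({"base_part"}), "robot"),
    AssemblyStep("insert_screw", frozenset({"base_part", "screw"}), "human"),
]
ctrl = SequenceController(steps)
print(ctrl.update({"base_part"}))           # start:robot:place_base
print(ctrl.update({"screw"}))               # None (base_part not visible)
print(ctrl.update({"base_part", "screw"}))  # start:human:insert_screw
```

The point of the design is the one highlighted in the abstract: the controller reacts to what the camera already sees during normal work, so the human never pauses to signal the robot.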

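The 72.26% figure quoted in the abstract ("mean average performance") is presumably the mean average precision (mAP) standard in object detection: an average-precision score per class, averaged over classes. A toy sketch of that computation follows; it uses the simple area-under-the-precision-recall-curve form and omits the IoU box matching a real evaluator (and the paper's own evaluation) would perform, so treat it as an illustration of the metric, not the authors' method.

```python
# Toy mAP: AP per class from confidence-ranked detections, then mean over classes.
def average_precision(detections, num_gt):
    """detections: list of (confidence, is_true_positive); num_gt: ground-truth count."""
    detections = sorted(detections, key=lambda d: -d[0])  # rank by confidence
    tp, ap, prev_recall = 0, 0.0, 0.0
    for rank, (_, is_tp) in enumerate(detections, start=1):
        if is_tp:
            tp += 1
            recall = tp / num_gt
            precision = tp / rank
            ap += (recall - prev_recall) * precision      # area under PR curve
            prev_recall = recall
    return ap

def mean_ap(per_class):
    """per_class: dict mapping class name -> (detections, num_gt)."""
    aps = [average_precision(d, n) for d, n in per_class.values()]
    return sum(aps) / len(aps)

# Example: one class, two ground-truth objects, three ranked detections.
dets = [(0.9, True), (0.8, False), (0.7, True)]
print(round(average_precision(dets, 2), 4))  # 0.8333
```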

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f04d/9823442/c2c15455a0aa/sensors-23-00553-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f04d/9823442/aae1bb18a99a/sensors-23-00553-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f04d/9823442/d71bf75b058b/sensors-23-00553-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f04d/9823442/5b3b99b91a89/sensors-23-00553-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f04d/9823442/5764cf840b50/sensors-23-00553-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f04d/9823442/664a31137a59/sensors-23-00553-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f04d/9823442/73280bbcc602/sensors-23-00553-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f04d/9823442/221a7a44dba2/sensors-23-00553-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f04d/9823442/b98fc1ac8a79/sensors-23-00553-g009.jpg

Similar Articles

1. Deep Learning Framework for Controlling Work Sequence in Collaborative Human-Robot Assembly Processes.
   Sensors (Basel). 2023 Jan 3;23(1):553. doi: 10.3390/s23010553.
2. Egocentric Gesture Recognition Using 3D Convolutional Neural Networks for the Spatiotemporal Adaptation of Collaborative Robots.
   Front Neurorobot. 2021 Nov 23;15:703545. doi: 10.3389/fnbot.2021.703545. eCollection 2021.
3. An adaptive reinforcement learning-based multimodal data fusion framework for human-robot confrontation gaming.
   Neural Netw. 2023 Jul;164:489-496. doi: 10.1016/j.neunet.2023.04.043. Epub 2023 May 6.
4. Standing-Posture Recognition in Human-Robot Collaboration Based on Deep Learning and the Dempster-Shafer Evidence Theory.
   Sensors (Basel). 2020 Feb 20;20(4):1158. doi: 10.3390/s20041158.
5. Collaborating eye to eye: Effects of workplace design on the perception of dominance of collaboration robots.
   Front Robot AI. 2022 Sep 27;9:999308. doi: 10.3389/frobt.2022.999308. eCollection 2022.
6. A Resilient and Effective Task Scheduling Approach for Industrial Human-Robot Collaboration.
   Sensors (Basel). 2022 Jun 29;22(13):4901. doi: 10.3390/s22134901.
7. Exploring the impact of human-robot interaction on workers' mental stress in collaborative assembly tasks.
   Appl Ergon. 2024 Apr;116:104224. doi: 10.1016/j.apergo.2024.104224. Epub 2024 Jan 5.
8. Physiological Indicators of Fluency and Engagement during Sequential and Simultaneous Modes of Human-Robot Collaboration.
   IISE Trans Occup Ergon Hum Factors. 2024 Jan-Jun;12(1-2):97-111. doi: 10.1080/24725838.2023.2287015. Epub 2023 Dec 6.
9. Recognition of Grasping Patterns Using Deep Learning for Human-Robot Collaboration.
   Sensors (Basel). 2023 Nov 5;23(21):8989. doi: 10.3390/s23218989.
10. A Cooperative Shared Control Scheme Based on Intention Recognition for Flexible Assembly Manufacturing.
    Front Neurorobot. 2022 Mar 16;16:850211. doi: 10.3389/fnbot.2022.850211. eCollection 2022.

Cited By

1. Eddy Currents Probe Design for NDT Applications: A Review.
   Sensors (Basel). 2024 Sep 7;24(17):5819. doi: 10.3390/s24175819.
2. On the Evaluation of Diverse Vision Systems towards Detecting Human Pose in Collaborative Robot Applications.
   Sensors (Basel). 2024 Jan 17;24(2):578. doi: 10.3390/s24020578.
3. Elderly and visually impaired indoor activity monitoring based on Wi-Fi and Deep Hybrid convolutional neural network.
   Sci Rep. 2023 Dec 18;13(1):22470. doi: 10.1038/s41598-023-48860-5.
4. Regression-Based Camera Pose Estimation through Multi-Level Local Features and Global Features.
   Sensors (Basel). 2023 Apr 18;23(8):4063. doi: 10.3390/s23084063.
5. A Novel Simulated Annealing-Based Hyper-Heuristic Algorithm for Stochastic Parallel Disassembly Line Balancing in Smart Remanufacturing.
   Sensors (Basel). 2023 Feb 2;23(3):1652. doi: 10.3390/s23031652.

References

1. Novel Hybrid Brain-Computer Interface for Virtual Reality Applications Using Steady-State Visual-Evoked Potential-Based Brain-Computer Interface and Electrooculogram-Based Eye Tracking for Increased Information Transfer Rate.
   Front Neuroinform. 2022 Feb 24;16:758537. doi: 10.3389/fninf.2022.758537. eCollection 2022.
2. Multisensor Inspection of Laser-Brazed Joints in the Automotive Industry.
   Sensors (Basel). 2021 Nov 4;21(21):7335. doi: 10.3390/s21217335.
3. Image Generation for 2D-CNN Using Time-Series Signal Features from Foot Gesture Applied to Select Cobot Operating Mode.
   Sensors (Basel). 2021 Aug 26;21(17):5743. doi: 10.3390/s21175743.
4. Dynamic Acoustic Unit Augmentation with BPE-Dropout for Low-Resource End-to-End Speech Recognition.
   Sensors (Basel). 2021 Apr 28;21(9):3063. doi: 10.3390/s21093063.
5. Human-Robot Perception in Industrial Environments: A Survey.
   Sensors (Basel). 2021 Feb 24;21(5):1571. doi: 10.3390/s21051571.
6. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.
   IEEE Trans Pattern Anal Mach Intell. 2017 Jun;39(6):1137-1149. doi: 10.1109/TPAMI.2016.2577031. Epub 2016 Jun 6.