



DeepDynamicHand: A Deep Neural Architecture for Labeling Hand Manipulation Strategies in Video Sources Exploiting Temporal Information.

Author Information

Arapi Visar, Della Santina Cosimo, Bacciu Davide, Bianchi Matteo, Bicchi Antonio

Affiliations

Centro di Ricerca "Enrico Piaggio," Università di Pisa, Pisa, Italy.

Dipartimento di Ingegneria dell'Informazione, Università di Pisa, Pisa, Italy.

Publication Information

Front Neurorobot. 2018 Dec 17;12:86. doi: 10.3389/fnbot.2018.00086. eCollection 2018.

DOI: 10.3389/fnbot.2018.00086
PMID: 30618707
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC6304372/
Abstract

Humans are capable of complex manipulation interactions with the environment, relying on the intrinsic adaptability and compliance of their hands. Recently, soft robotic manipulation has attempted to reproduce such an extraordinary behavior, through the design of deformable yet robust end-effectors. To this goal, the investigation of human behavior has become crucial to correctly inform technological developments of robotic hands that can successfully exploit environmental constraint as humans actually do. Among the different tools robotics can leverage on to achieve this objective, deep learning has emerged as a promising approach for the study and then the implementation of neuro-scientific observations on the artificial side. However, current approaches tend to neglect the dynamic nature of hand pose recognition problems, limiting the effectiveness of these techniques in identifying sequences of manipulation primitives underpinning action generation, e.g., during purposeful interaction with the environment. In this work, we propose a vision-based supervised Hand Pose Recognition method which, for the first time, takes into account temporal information to identify meaningful sequences of actions in grasping and manipulation tasks. More specifically, we apply Deep Neural Networks to automatically learn features from hand posture images that consist of frames extracted from grasping and manipulation task videos with objects and external environmental constraints. For training purposes, videos are divided into intervals, each associated to a specific action by a human supervisor. The proposed algorithm combines a Convolutional Neural Network to detect the hand within each video frame and a Recurrent Neural Network to predict the hand action in the current frame, while taking into consideration the history of actions performed in the previous frames. 
Experimental validation was performed on two datasets of dynamic hand-centric strategies, in which subjects regularly interact with objects and the environment. The proposed architecture achieved very good classification accuracy on both datasets, reaching up to 94% and outperforming state-of-the-art techniques. The outcomes of this study can be successfully applied to robotics, e.g., for the planning and control of soft anthropomorphic manipulators.
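The abstract describes a two-stage pipeline: a CNN detects the hand and scores each video frame, and an RNN predicts the current action while carrying the history of previous frames. The sketch below is a minimal stand-in for that idea only, not the authors' implementation: the CNN is replaced by a toy per-frame scoring function over dict-based "frames", and the RNN by an exponentially smoothed state; the action labels, `frame_scores`, `label_video`, and `alpha` are all illustrative assumptions.

```python
# Minimal sketch of the CNN+RNN labeling pipeline from the abstract.
# The per-frame stage and the recurrent state are toy stand-ins.

ACTIONS = ["reach", "grasp", "slide", "release"]  # hypothetical action labels

def frame_scores(frame):
    """Stand-in for the CNN stage: map one frame (here a dict of toy
    hand features) to a per-action score vector."""
    return [frame.get(a, 0.0) for a in ACTIONS]

def label_video(frames, alpha=0.6):
    """Stand-in for the RNN stage: blend current-frame scores with a
    running state so earlier frames influence the current label."""
    state = [0.0] * len(ACTIONS)
    labels = []
    for frame in frames:
        scores = frame_scores(frame)
        # recurrent update: new state mixes current evidence with history
        state = [alpha * s + (1 - alpha) * h for s, h in zip(scores, state)]
        labels.append(ACTIONS[max(range(len(ACTIONS)), key=state.__getitem__)])
    return labels

if __name__ == "__main__":
    video = [
        {"reach": 1.0},
        {"reach": 0.4, "grasp": 0.5},  # ambiguous frame: history favors "reach"
        {"grasp": 1.0},
    ]
    print(label_video(video))  # → ['reach', 'reach', 'grasp']
```

The middle frame shows why temporal information matters: its instantaneous scores slightly favor "grasp", but the accumulated history keeps the label at "reach" until the evidence for "grasp" becomes decisive.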


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6038/6304372/bb70359db5b4/fnbot-12-00086-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6038/6304372/917c192f5519/fnbot-12-00086-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6038/6304372/a0be02e458f5/fnbot-12-00086-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6038/6304372/b1b6ab7fc879/fnbot-12-00086-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6038/6304372/213da366b9c6/fnbot-12-00086-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6038/6304372/e2c01ea3df3c/fnbot-12-00086-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6038/6304372/1b7b92fd2c4b/fnbot-12-00086-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6038/6304372/3fd2117513e8/fnbot-12-00086-g0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6038/6304372/2c426e627a94/fnbot-12-00086-g0009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6038/6304372/d96c624166d1/fnbot-12-00086-g0010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6038/6304372/4556c195762d/fnbot-12-00086-g0011.jpg

Similar Articles

1
Hierarchical Recurrent Neural Hashing for Image Retrieval With Hierarchical Convolutional Features.
IEEE Trans Image Process. 2018;27(1):106-120. doi: 10.1109/TIP.2017.2755766.
2
A novel end-to-end classifier using domain transferred deep convolutional neural networks for biomedical images.
Comput Methods Programs Biomed. 2017 Mar;140:283-293. doi: 10.1016/j.cmpb.2016.12.019. Epub 2017 Jan 6.
3
A Deep Learning-Based End-to-End Composite System for Hand Detection and Gesture Recognition.
Sensors (Basel). 2019 Nov 30;19(23):5282. doi: 10.3390/s19235282.
4
Grasping learning, optimization, and knowledge transfer in the robotics field.
Sci Rep. 2022 Mar 16;12(1):4481. doi: 10.1038/s41598-022-08276-z.
5
Real-time multiple spatiotemporal action localization and prediction approach using deep learning.
Neural Netw. 2020 Aug;128:331-344. doi: 10.1016/j.neunet.2020.05.017. Epub 2020 May 19.
6
Object Manipulation with an Anthropomorphic Robotic Hand via Deep Reinforcement Learning with a Synergy Space of Natural Hand Poses.
Sensors (Basel). 2021 Aug 5;21(16):5301. doi: 10.3390/s21165301.
7
Deep Learning with Convolutional Neural Networks Applied to Electromyography Data: A Resource for the Classification of Movements for Prosthetic Hands.
Front Neurorobot. 2016 Sep 7;10:9. doi: 10.3389/fnbot.2016.00009. eCollection 2016.
8
Self-Supervised Learning to Detect Key Frames in Videos.
Sensors (Basel). 2020 Dec 4;20(23):6941. doi: 10.3390/s20236941.
9
Estimating the Orientation of Objects from Tactile Sensing Data Using Machine Learning Methods and Visual Frames of Reference.
Sensors (Basel). 2019 May 17;19(10):2285. doi: 10.3390/s19102285.

Cited By

1
The Treachery of Images: How Realism Influences Brain and Behavior.
Trends Cogn Sci. 2021 Jun;25(6):506-519. doi: 10.1016/j.tics.2021.02.008. Epub 2021 Mar 25.
2
A Piezoresistive Array Armband With Reduced Number of Sensors for Hand Gesture Recognition.
Front Neurorobot. 2020 Jan 17;13:114. doi: 10.3389/fnbot.2019.00114. eCollection 2019.

References

1
Lending A Hand: Detecting Hands and Recognizing Activities in Complex Egocentric Interactions.
Proc IEEE Int Conf Comput Vis. 2015 Dec;2015:1949-1957. doi: 10.1109/ICCV.2015.226. Epub 2016 Feb 18.
2
Postural Hand Synergies during Environmental Constraint Exploitation.
Front Neurorobot. 2017 Aug 29;11:41. doi: 10.3389/fnbot.2017.00041. eCollection 2017.
3
Recent Data Sets on Object Manipulation: A Survey.
Big Data. 2016 Dec;4(4):197-216. doi: 10.1089/big.2016.0042.
4
A Synergy-Based Optimally Designed Sensing Glove for Functional Grasp Recognition.
Sensors (Basel). 2016 Jun 2;16(6):811. doi: 10.3390/s16060811.
5
What Makes for Effective Detection Proposals?
IEEE Trans Pattern Anal Mach Intell. 2016 Apr;38(4):814-30. doi: 10.1109/TPAMI.2015.2465908.
6
Hand synergies: Integration of robotics and neuroscience for understanding the control of biological and artificial hands.
Phys Life Rev. 2016 Jul;17:1-23. doi: 10.1016/j.plrev.2016.02.001. Epub 2016 Feb 3.
7
Deep learning.
Nature. 2015 May 28;521(7553):436-44. doi: 10.1038/nature14539.
8
Long short-term memory.
Neural Comput. 1997 Nov 15;9(8):1735-80. doi: 10.1162/neco.1997.9.8.1735.
9
A theoretical model of phase transitions in human hand movements.
Biol Cybern. 1985;51(5):347-56. doi: 10.1007/BF00336922.