Suppr 超能文献


Inferring Interaction Force from Visual Information without Using Physical Force Sensors

Affiliations

Department of Software and Computer Engineering, Ajou University, 206 Worldcup-ro, Yeongtong-gu, Suwon 16499, Korea.

Department of Mechanical, Robotics and Energy Engineering, Dongguk University, 30, Pildong-ro 1gil, Jung-gu, Seoul 04620, Korea.

Publication Information

Sensors (Basel). 2017 Oct 26;17(11):2455. doi: 10.3390/s17112455.
PMID: 29072597
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC5713494/
Abstract

In this paper, we present an interaction force estimation method that uses visual information rather than a force sensor. Specifically, we propose a novel deep learning-based method that uses only sequential images to estimate the interaction force applied to a target object whose shape is deformed by the external force. The applied force can be estimated from the visible shape changes; however, the shape differences between successive images are subtle. To address this problem, we formulate a recurrent neural network-based deep model with fully-connected layers, which captures the complex temporal dynamics of the visual representations. Extensive evaluations show that the proposed learning models successfully estimate interaction forces using only the corresponding image sequences, in particular for four objects made of different materials: a sponge, a PET bottle, a human arm, and a tube. The forces predicted by the proposed method closely match those measured by force sensors.
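The abstract describes a recurrent model with fully-connected layers that maps per-frame visual representations to a force estimate. As a minimal illustration of that idea (not the authors' implementation), the sketch below runs a hand-written LSTM cell over a sequence of feature vectors and regresses one force value per frame through a linear head. All dimensions, weights, and the stand-in "CNN features" are illustrative assumptions.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: input/forget/output gates and candidate cell state."""
    z = W @ x + U @ h + b                      # stacked pre-activations, shape (4H,)
    H = h.shape[0]
    i = 1 / (1 + np.exp(-z[:H]))               # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))            # forget gate
    o = 1 / (1 + np.exp(-z[2*H:3*H]))          # output gate
    g = np.tanh(z[3*H:])                       # candidate cell state
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def estimate_forces(features, W, U, b, W_out, b_out):
    """Map a sequence of visual feature vectors to one force estimate per frame."""
    H = U.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    forces = []
    for x in features:
        h, c = lstm_step(x, h, c, W, U, b)
        forces.append(W_out @ h + b_out)       # fully-connected regression head
    return np.array(forces)

rng = np.random.default_rng(0)
D, H, T = 16, 8, 5                             # feature dim, hidden dim, frames (illustrative)
W = rng.normal(0, 0.1, (4 * H, D))
U = rng.normal(0, 0.1, (4 * H, H))
b = np.zeros(4 * H)
W_out = rng.normal(0, 0.1, (1, H))
b_out = np.zeros(1)

seq = rng.normal(size=(T, D))                  # stand-in for per-frame CNN features
print(estimate_forces(seq, W, U, b, W_out, b_out).shape)  # (5, 1)
```

With random weights the outputs are of course meaningless; in the paper's setting the network would be trained so that the per-frame regression targets are the synchronized force-sensor readings.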


(Figures 1–11 are available in the PMC full text linked above.)


