
Inferring Interaction Force from Visual Information without Using Physical Force Sensors.

Affiliations

Department of Software and Computer Engineering, Ajou University, 206 Worldcup-ro, Yeongtong-gu, Suwon 16499, Korea.

Department of Mechanical, Robotics and Energy Engineering, Dongguk University, 30, Pildong-ro 1gil, Jung-gu, Seoul 04620, Korea.

Publication Information

Sensors (Basel). 2017 Oct 26;17(11):2455. doi: 10.3390/s17112455.

Abstract

In this paper, we present an interaction force estimation method that uses visual information rather than a physical force sensor. Specifically, we propose a novel deep learning-based method that uses only sequential images to estimate the interaction force applied to a target object whose shape is deformed by the external force. The applied force can be estimated from the visible shape changes; however, the shape differences between consecutive images are subtle. To address this problem, we formulate a recurrent neural network-based deep model with fully-connected layers, which captures the complex temporal dynamics of the visual representations. Extensive evaluations show that the proposed learning models successfully estimate the interaction forces using only the corresponding sequential images, in particular for four objects made of different materials: a sponge, a PET bottle, a human arm, and a tube. The forces predicted by the proposed method closely match those measured by force sensors.
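
A minimal PyTorch sketch of the kind of model the abstract describes is given below. This is an illustrative outline only, not the authors' released implementation: the 64x64 grayscale input, the fully-connected per-frame encoder, the single LSTM layer, and all layer sizes are assumptions made for the example.

    # Hypothetical sketch: regress a per-frame interaction force from a
    # sequence of image frames. Each frame is reduced to a feature vector by
    # fully-connected layers, and an LSTM models the temporal dynamics of the
    # object's shape change.
    import torch
    import torch.nn as nn

    class VisualForceEstimator(nn.Module):
        def __init__(self, frame_dim=64 * 64, feat_dim=256, hidden_dim=128):
            super().__init__()
            # Per-frame encoder: maps a flattened frame to a feature vector.
            self.encoder = nn.Sequential(
                nn.Linear(frame_dim, 512),
                nn.ReLU(),
                nn.Linear(512, feat_dim),
                nn.ReLU(),
            )
            # Recurrent layer captures how the deformation evolves over time.
            self.rnn = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
            # Regression head: one scalar force estimate per frame.
            self.head = nn.Linear(hidden_dim, 1)

        def forward(self, frames):
            # frames: (batch, time, height * width)
            b, t, d = frames.shape
            feats = self.encoder(frames.reshape(b * t, d)).reshape(b, t, -1)
            hidden, _ = self.rnn(feats)
            return self.head(hidden).squeeze(-1)  # (batch, time) force trace

    # Usage example: 8 clips of 20 frames each, trained against force-sensor
    # readings with an L2 loss (dummy tensors stand in for real data).
    model = VisualForceEstimator()
    frames = torch.randn(8, 20, 64 * 64)
    target_force = torch.randn(8, 20)
    loss = nn.functional.mse_loss(model(frames), target_force)
    loss.backward()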

Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2142/5713494/3e78e1a66890/sensors-17-02455-g001.jpg
