

Vision-Based Tactile Sensor Mechanism for the Estimation of Contact Position and Force Distribution Using Deep Learning.

Affiliations

Information and Communication Engineering, Inha University, 100 Inharo, Nam-gu, Incheon 22212, Korea.

VisionIn Inc. Global R&D Center, 704 Ace Gasan Tower, 121 Digital-ro, Geumcheon-gu, Seoul 08505, Korea.

Publication Information

Sensors (Basel). 2021 Mar 9;21(5):1920. doi: 10.3390/s21051920.

Abstract

This work describes the development of a vision-based tactile sensor system that uses image-based information from the tactile sensor, together with input loads applied under various motions, to train a neural network for the estimation of tactile contact position, area, and force distribution. The study also addresses pragmatic aspects, such as the choice of thickness and materials for the tactile fingertips and surface tendency. The overall vision-based tactile sensor equipment interacts with an actuating motion controller, a force gauge, and a control PC (personal computer) running LabVIEW software. Image acquisition was carried out using a compact stereo camera setup mounted inside the elastic body to observe and measure the amount of deformation caused by the motion and input load. The vision-based tactile sensor test bench was employed to collect the output contact position, angle, and force distribution produced by various randomly chosen input loads for motion in the x, y, and z directions and Rx, Ry rotational motion. The retrieved image information, contact position, area, and force distribution from different input loads with specified 3D position and angle are used for deep learning. A convolutional neural network, the VGG-16 classification model, was modified into a regression network, and transfer learning was applied to suit the regression task of estimating contact position and force distribution. Several experiments were carried out using thick and thin tactile sensors with various shapes, such as circles, squares, and hexagons, for better validation of the predicted contact position, contact area, and force distribution.


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5696/7967204/b5f361236564/sensors-21-01920-g0A1.jpg
