


Cross-Modal Reconstruction for Tactile Signal in Human-Robot Interaction.

Affiliations

Key Laboratory of Broadband Wireless Communication and Sensor Network Technology, Ministry of Education, School of Communication and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210003, China.

Publication information

Sensors (Basel). 2022 Aug 29;22(17):6517. doi: 10.3390/s22176517.

DOI: 10.3390/s22176517
PMID: 36080977
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9460542/
Abstract

Owing to prior knowledge, a human can infer the magnitude of an interaction force from visual information alone in human-robot interaction (HRI). This paper proposes a method of reconstructing tactile information through cross-modal signal processing, in which visual information serves as an auxiliary source for the tactile channel: the receiver determines the tactile interaction force from the visual information alone. The method first processes groups of pictures (GOPs) and treats them as the input. Second, a low-rank foreground-based attention mechanism (LAM) detects regions of interest (ROIs). Finally, a linear regression convolutional neural network (LRCNN) infers the contact force in video frames. Experimental results show that the cross-modal reconstruction is feasible; moreover, compared with prior work, the method reduces network complexity and improves material-identification accuracy.
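The pipeline described in the abstract (GOP input, then low-rank foreground attention to find ROIs, then force regression) can be illustrated with a minimal NumPy sketch of the low-rank foreground step. This is a hypothetical toy, not the paper's implementation: the `foreground_attention` function, the synthetic GOP, the rank-1 approximation, and the 0.1 threshold are all assumptions for illustration.

```python
import numpy as np

def foreground_attention(frames, rank=1, thresh=0.1):
    """Toy sketch of a low-rank foreground attention step: split a group
    of pictures into a low-rank background plus a sparse residual, then
    threshold the residual to obtain a region-of-interest mask."""
    T, H, W = frames.shape
    D = frames.reshape(T, H * W).T              # pixels x frames matrix
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]    # low-rank background
    S = D - L                                   # sparse foreground residual
    return (np.abs(S).mean(axis=1) > thresh).reshape(H, W)

# Toy GOP: a static background with a bright patch that moves frame to frame,
# standing in for a contact region in an HRI video.
rng = np.random.default_rng(0)
gop = np.tile(rng.uniform(0, 0.05, (8, 8)), (4, 1, 1))
for t in range(4):
    gop[t, 2:4, t:t + 2] += 1.0                 # moving "contact" patch
mask = foreground_attention(gop)                # ROI concentrates on the patch
```

In the paper's pipeline the ROI mask would gate which pixels the regression network attends to; here the rank-1 background model suffices only because the toy background is identical across frames.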


Figures (g001–g013):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/831b/9460542/d7ae63bf1ad0/sensors-22-06517-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/831b/9460542/5522eba4245d/sensors-22-06517-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/831b/9460542/3647ee8b32e3/sensors-22-06517-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/831b/9460542/72c0d3586560/sensors-22-06517-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/831b/9460542/29cd0cbbb38f/sensors-22-06517-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/831b/9460542/99da5f4e13ea/sensors-22-06517-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/831b/9460542/525aa457c528/sensors-22-06517-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/831b/9460542/d4cb88f4172b/sensors-22-06517-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/831b/9460542/b86619fa9e46/sensors-22-06517-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/831b/9460542/9a4b64a47537/sensors-22-06517-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/831b/9460542/5aa81898e457/sensors-22-06517-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/831b/9460542/3299fc2cfb60/sensors-22-06517-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/831b/9460542/ccc210935db6/sensors-22-06517-g013.jpg

Similar articles

1. Cross-Modal Reconstruction for Tactile Signal in Human-Robot Interaction. Sensors (Basel). 2022 Aug 29;22(17):6517. doi: 10.3390/s22176517.
2. Transfer of Learning from Vision to Touch: A Hybrid Deep Convolutional Neural Network for Visuo-Tactile 3D Object Recognition. Sensors (Basel). 2020 Dec 27;21(1):113. doi: 10.3390/s21010113.
3. Behavioural Models of Risk-Taking in Human-Robot Tactile Interactions. Sensors (Basel). 2023 May 16;23(10):4786. doi: 10.3390/s23104786.
4. Integrating information from vision and touch: a neural network modeling study. IEEE Trans Inf Technol Biomed. 2010 May;14(3):598-612. doi: 10.1109/TITB.2010.2040750. Epub 2010 Feb 2.
5. An Open-Environment Tactile Sensing System: Toward Simple and Efficient Material Identification. Adv Mater. 2022 Jul;34(29):e2203073. doi: 10.1002/adma.202203073. Epub 2022 Jun 8.
6. Using 3D Convolutional Neural Networks for Tactile Object Recognition with Robotic Palpation. Sensors (Basel). 2019 Dec 5;19(24):5356. doi: 10.3390/s19245356.
7. Perception of Tactile Directionality via Artificial Fingerpad Deformation and Convolutional Neural Networks. IEEE Trans Haptics. 2020 Oct-Dec;13(4):831-839. doi: 10.1109/TOH.2020.2975555. Epub 2020 Dec 25.
8. Neuromorphic Tactile Edge Orientation Classification in an Unsupervised Spiking Neural Network. Sensors (Basel). 2022 Sep 15;22(18):6998. doi: 10.3390/s22186998.
9. Toward a tactile language for human-robot interaction: two studies of tacton learning and performance. Hum Factors. 2015 May;57(3):471-90. doi: 10.1177/0018720814548063. Epub 2014 Aug 28.
10. Spatial Calibration of Humanoid Robot Flexible Tactile Skin for Human-Robot Interaction. Sensors (Basel). 2023 May 8;23(9):4569. doi: 10.3390/s23094569.

Cited by

1. Cross-Modal Contrastive Hashing Retrieval for Infrared Video and EEG. Sensors (Basel). 2022 Nov 14;22(22):8804. doi: 10.3390/s22228804.

References

1. An Efficient Three-Dimensional Convolutional Neural Network for Inferring Physical Interaction Force from Video. Sensors (Basel). 2019 Aug 17;19(16):3579. doi: 10.3390/s19163579.
2. Squeeze-and-Excitation Networks. IEEE Trans Pattern Anal Mach Intell. 2020 Aug;42(8):2011-2023. doi: 10.1109/TPAMI.2019.2913372. Epub 2019 Apr 29.
3. Inferring Interaction Force from Visual Information without Using Physical Force Sensors. Sensors (Basel). 2017 Oct 26;17(11):2455. doi: 10.3390/s17112455.
4. RPCA-KFE: Key Frame Extraction for Video Using Robust Principal Component Analysis. IEEE Trans Image Process. 2015 Nov;24(11):3742-53. doi: 10.1109/TIP.2015.2445572. Epub 2015 Jun 15.
5. Robust principal component analysis based on maximum correntropy criterion. IEEE Trans Image Process. 2011 Jun;20(6):1485-94. doi: 10.1109/TIP.2010.2103949. Epub 2011 Jan 6.
6. Long short-term memory. Neural Comput. 1997 Nov 15;9(8):1735-80. doi: 10.1162/neco.1997.9.8.1735.