Cross-Modal Reconstruction for Tactile Signal in Human-Robot Interaction.

Affiliation

Key Laboratory of Broadband Wireless Communication and Sensor Network Technology, Ministry of Education, School of Communication and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210003, China.

Publication Information

Sensors (Basel). 2022 Aug 29;22(17):6517. doi: 10.3390/s22176517.

Abstract

In human-robot interaction (HRI), a human can infer the magnitude of an interaction force from visual information alone, thanks to prior knowledge. This paper proposes a method for reconstructing tactile information through cross-modal signal processing. In our method, visual information is added as an auxiliary source for tactile information, so that the receiver can determine the tactile interaction force from the visual information alone. The method first takes groups of pictures (GOPs) as input. It then uses a low-rank foreground-based attention mechanism (LAM) to detect regions of interest (ROIs). Finally, a proposed linear regression convolutional neural network (LRCNN) infers the contact force in the video frames. Experimental results show that this cross-modal reconstruction is feasible and that, compared with other work, our method reduces network complexity and improves material-identification accuracy.
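
The abstract outlines a three-stage pipeline: GOP frames as input, LAM-based ROI detection, and LRCNN force regression. The sketch below illustrates one plausible reading of that pipeline in PyTorch; the paper's actual LAM and LRCNN architectures are not given here, so the rank-1 SVD foreground step, the layer sizes, and all function and class names are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of the cross-modal pipeline described in the abstract,
# assuming a PyTorch implementation. Module names, layer sizes, and the
# low-rank foreground step are hypothetical, not the paper's design.

import torch
import torch.nn as nn


def lowrank_foreground_mask(gop: torch.Tensor, rank: int = 1) -> torch.Tensor:
    """Separate a mostly static background (low-rank across frames) from the
    moving foreground, one common way to realize a 'low-rank foreground'
    attention prior. gop: (T, H, W) grayscale frames in [0, 1]."""
    t, h, w = gop.shape
    flat = gop.reshape(t, h * w)                       # each row is one frame
    u, s, vh = torch.linalg.svd(flat, full_matrices=False)
    background = (u[:, :rank] * s[:rank]) @ vh[:rank]  # rank-r approximation
    residual = (flat - background).abs().reshape(t, h, w)
    return residual / (residual.amax() + 1e-8)         # soft attention in [0, 1]


class LRCNN(nn.Module):
    """Illustrative regression CNN: convolutional features followed by a
    linear head that outputs one contact-force value per frame."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(32 * 4 * 4, 1)           # linear regression head

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (T, 1, H, W) attention-weighted frames -> (T,) forces
        z = self.features(frames).flatten(1)
        return self.head(z).squeeze(-1)


if __name__ == "__main__":
    gop = torch.rand(8, 64, 64)                        # toy 8-frame GOP
    attn = lowrank_foreground_mask(gop)                # LAM-style ROI weighting
    model = LRCNN()
    force = model((gop * attn).unsqueeze(1))           # per-frame contact force
    print(force.shape)                                 # torch.Size([8])
```

A low-rank decomposition suits this task because a camera observing an HRI scene sees a largely static background; the frame-to-frame residual after removing the best rank-r approximation concentrates on the moving contact region, which is exactly where force-relevant deformation appears.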

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/831b/9460542/d7ae63bf1ad0/sensors-22-06517-g001.jpg
