A Framework for Sensorimotor Cross-Perception and Cross-Behavior Knowledge Transfer for Object Categorization.

Authors

Tatiya Gyan, Hosseini Ramtin, Hughes Michael C, Sinapov Jivko

Affiliation

Department of Computer Science, Tufts University, Medford, MA, United States.

Publication

Front Robot AI. 2020 Oct 9;7:522141. doi: 10.3389/frobt.2020.522141. eCollection 2020.

DOI: 10.3389/frobt.2020.522141
PMID: 33501303
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7805839/
Abstract

From an early age, humans learn to develop an intuition for the physical nature of the objects around them by using exploratory behaviors. Such exploration provides observations of how objects feel, sound, look, and move as a result of actions applied on them. Previous works in robotics have shown that robots can also use such behaviors (e.g., lifting, pressing, shaking) to infer object properties that camera input alone cannot detect. Such learned representations are specific to each individual robot and cannot currently be transferred directly to another robot with different sensors and actions. Moreover, sensor failure can cause a robot to lose a specific sensory modality which may prevent it from using perceptual models that require it as input. To address these limitations, we propose a framework for knowledge transfer across behaviors and sensory modalities such that: (1) knowledge can be transferred from one or more robots to another, and, (2) knowledge can be transferred from one or more sensory modalities to another. We propose two different models for transfer based on variational auto-encoders and encoder-decoder networks. The main hypothesis behind our approach is that if two or more robots share multi-sensory object observations of a shared set of objects, then those observations can be used to establish mappings between multiple features spaces, each corresponding to a combination of an exploratory behavior and a sensory modality. We evaluate our approach on a category recognition task using a dataset in which a robot used 9 behaviors, coupled with 4 sensory modalities, performed multiple times on 100 objects. The results indicate that sensorimotor knowledge about objects can be transferred both across behaviors and across sensory modalities, such that a new robot (or the same robot, but with a different set of sensors) can bootstrap its category recognition models without having to exhaustively explore the full set of objects.
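The abstract names two model families for the transfer step, variational auto-encoders and encoder-decoder networks, which learn mappings between feature spaces tied to (behavior, modality) pairs using objects that both robots have explored. The sketch below illustrates only the encoder-decoder variant under that reading; the feature dimensions, layer sizes, training settings, and variable names are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of encoder-decoder style cross-perception transfer: learn a
# mapping from a source (behavior, modality) feature space to a target one
# using shared objects, then project features for objects the target robot
# has not explored. All sizes and names below are assumptions.
import torch
import torch.nn as nn

class FeatureMapper(nn.Module):
    """Encoder-decoder network mapping source features to target features."""
    def __init__(self, src_dim: int, tgt_dim: int, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(src_dim, latent_dim), nn.ReLU())
        self.decoder = nn.Linear(latent_dim, tgt_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

# Paired observations of the shared objects (random stand-ins here).
n_shared, src_dim, tgt_dim = 60, 128, 64
src_feats = torch.randn(n_shared, src_dim)   # e.g., "shake" behavior, audio modality
tgt_feats = torch.randn(n_shared, tgt_dim)   # e.g., "press" behavior, haptic modality

model = FeatureMapper(src_dim, tgt_dim)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Fit the mapping on the shared objects.
for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(src_feats), tgt_feats)
    loss.backward()
    optimizer.step()

# Generate target-space features for novel objects observed only in the
# source space; these can bootstrap a category classifier in the target space.
novel_src = torch.randn(10, src_dim)
with torch.no_grad():
    generated_tgt = model(novel_src)
print(generated_tgt.shape)  # torch.Size([10, 64])
```

A category classifier trained on real target-space features of the shared objects could then be evaluated on such generated features, which is roughly how the abstract frames the category recognition experiment.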

[Figures 1-24 of the article are available in the PMC record: https://pmc.ncbi.nlm.nih.gov/articles/PMC7805839/]

Similar Articles

1. A Framework for Sensorimotor Cross-Perception and Cross-Behavior Knowledge Transfer for Object Categorization.
Front Robot AI. 2020 Oct 9;7:522141. doi: 10.3389/frobt.2020.522141. eCollection 2020.
2. Transfer of object category knowledge across visual and haptic modalities: experimental and computational studies.
Cognition. 2013 Feb;126(2):135-48. doi: 10.1016/j.cognition.2012.08.005. Epub 2012 Oct 25.
3. Learning efficient haptic shape exploration with a rigid tactile sensor array.
PLoS One. 2020 Jan 2;15(1):e0226880. doi: 10.1371/journal.pone.0226880. eCollection 2020.
4. Active Prior Tactile Knowledge Transfer for Learning Tactual Properties of New Objects.
Sensors (Basel). 2018 Feb 21;18(2):634. doi: 10.3390/s18020634.
5. Performance of a Computational Model of the Mammalian Olfactory System.
6. Active Haptic Perception in Robots: A Review.
Front Neurorobot. 2019 Jul 17;13:53. doi: 10.3389/fnbot.2019.00053. eCollection 2019.
7. From Multi-Modal Property Dataset to Robot-Centric Conceptual Knowledge About Household Objects.
Front Robot AI. 2021 Apr 15;8:476084. doi: 10.3389/frobt.2021.476084. eCollection 2021.
8. Tactile Object Recognition for Humanoid Robots Using New Designed Piezoresistive Tactile Sensor and DCNN.
Sensors (Basel). 2021 Sep 8;21(18):6024. doi: 10.3390/s21186024.
9. Learning the signatures of the human grasp using a scalable tactile glove.
Nature. 2019 May;569(7758):698-702. doi: 10.1038/s41586-019-1234-z. Epub 2019 May 29.
10. Cross-Situational Learning with Bayesian Generative Models for Multimodal Category and Word Learning in Robots.
Front Neurorobot. 2017 Dec 19;11:66. doi: 10.3389/fnbot.2017.00066. eCollection 2017.
