
Brain-Inspired Coding of Robot Body Schema Through Visuo-Motor Integration of Touched Events.

Authors

Pugach Ganna, Pitti Alexandre, Tolochko Olga, Gaussier Philippe

Affiliations

ETIS Laboratory, University Paris-Seine, CNRS UMR 8051, University of Cergy-Pontoise, ENSEA, Cergy-Pontoise, France.

Faculty of Electric Power Engineering and Automation, National Technical University of Ukraine Kyiv Polytechnic Institute, Kyiv, Ukraine.

Publication

Front Neurorobot. 2019 Mar 7;13:5. doi: 10.3389/fnbot.2019.00005. eCollection 2019.

Abstract

Representing objects in space is difficult because sensorimotor events are anchored in different reference frames, which can be eye-, arm-, or target-centered. In the brain, Gain-Field (GF) neurons in the parietal cortex are involved in computing the spatial transformations needed to align tactile, visual, and proprioceptive signals. In reaching tasks, these GF neurons exploit a mechanism based on multiplicative interaction for binding simultaneously touched events from the hand with visual and proprioceptive information. By doing so, they can infer new reference frames to represent dynamically the location of the body parts in the visual space (i.e., the body schema) and nearby targets (i.e., its peripersonal space). Along these lines, we propose a neural model based on GF neurons for integrating tactile events with arm postures and visual locations for constructing hand- and target-centered receptive fields in the visual space. In robotic experiments using an artificial skin, we show how our neural architecture reproduces the behaviors of parietal neurons (1) for encoding dynamically the body schema of our robotic arm without any visual tags on it and (2) for estimating the relative orientation and distance of targets to it. We demonstrate how tactile information facilitates the integration of visual and proprioceptive signals in order to construct the body space.
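The multiplicative gain-field mechanism summarized above can be illustrated as a basis-function network: each unit multiplies an eye-centered visual tuning curve by a proprioceptive gain, and a linear readout of the resulting population approximates a coordinate transformation (here, re-expressing a visual location in a hand-centered frame). This is a minimal sketch, not the paper's implementation; the population size, Gaussian tuning widths, and least-squares readout are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Preferred values tiling each 1-D input space (assumed population layout).
prefs = np.linspace(-2.0, 2.0, 15)

def tuning(x, sigma=0.5):
    """Gaussian population code for a scalar input."""
    return np.exp(-0.5 * ((x - prefs) / sigma) ** 2)

def gain_field(visual_pos, arm_angle):
    """Multiplicative (gain-field) interaction of visual and proprioceptive codes."""
    return np.outer(tuning(visual_pos), tuning(arm_angle)).ravel()

# Train a linear readout to recover the hand-centered target position,
# i.e., the visual position minus the arm position (a coordinate transformation).
X = rng.uniform(-1, 1, size=(500, 2))
R = np.array([gain_field(v, a) for v, a in X])
y = X[:, 0] - X[:, 1]
w, *_ = np.linalg.lstsq(R, y, rcond=None)

# The same readout generalizes to an unseen input pair:
pred = w @ gain_field(0.3, -0.2)   # close to 0.3 - (-0.2) = 0.5
```

Because the multiplicative units form a complete basis over the two inputs, the same population supports readouts in several reference frames at once, which is the property the abstract attributes to parietal GF neurons.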


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7641/6416207/f93d08b8de3e/fnbot-13-00005-g0002.jpg
