Suppr 超能文献



Neuromorphic audio-visual sensor fusion on a sound-localizing robot.

Authors

Chan Vincent Yue-Sek, Jin Craig T, van Schaik André

Affiliation

School of Electrical and Information Engineering, The University of Sydney, Sydney, NSW, Australia.

Publication

Front Neurosci. 2012 Feb 8;6:21. doi: 10.3389/fnins.2012.00021. eCollection 2012.

DOI: 10.3389/fnins.2012.00021
PMID: 22347165
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC3274764/
Abstract

This paper presents the first robotic system featuring audio-visual (AV) sensor fusion with neuromorphic sensors. We combine a pair of silicon cochleae and a silicon retina on a robotic platform to allow the robot to learn sound localization through self motion and visual feedback, using an adaptive ITD-based sound localization algorithm. After training, the robot can localize sound sources (white or pink noise) in a reverberant environment with an RMS error of 4-5° in azimuth. We also investigate the AV source binding problem and an experiment is conducted to test the effectiveness of matching an audio event with a corresponding visual event based on their onset time. Despite the simplicity of this method and a large number of false visual events in the background, a correct match can be made 75% of the time during the experiment.
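The ITD-based localization the abstract describes can be illustrated with a minimal, non-neuromorphic sketch: cross-correlating the two ear signals yields an interaural time difference (ITD), which a far-field model maps to azimuth. The microphone spacing, sample rate, and function names below are illustrative assumptions, not the paper's adaptive implementation.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
MIC_SPACING = 0.15      # m, assumed distance between the two "ears"

def estimate_azimuth(left, right, fs):
    """Estimate source azimuth (degrees) from a stereo snippet via ITD.

    0 deg = straight ahead; positive = source to the right.
    """
    # Cross-correlate to find the lag (in samples) at which the left
    # signal best aligns with the right signal.
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    itd = lag / fs  # interaural time difference in seconds
    # Far-field approximation: ITD = d * sin(theta) / c.
    s = np.clip(itd * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return np.degrees(np.arcsin(s))
```

For example, with 48 kHz sampling and 0.15 m spacing, a 10-sample delay on the left channel corresponds to an azimuth of roughly 28° to the right; the paper's robot instead learns this ITD-to-azimuth mapping through self motion and visual feedback.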

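The onset-based audio-visual binding experiment can likewise be sketched as a nearest-onset match: an audio event is bound to the visual event whose onset time lies closest to it within a tolerance window. The window value and the event representation here are assumptions for illustration; the abstract reports that this simple strategy yielded a correct match 75% of the time despite many false visual events.

```python
def match_onset(audio_onset, visual_events, window=0.1):
    """Bind an audio onset to the visual event closest in time.

    visual_events: list of (event_id, onset_time) pairs.
    Returns the best-matching event_id, or None if no visual onset
    falls within `window` seconds of the audio onset.
    """
    best_id, best_dt = None, window
    for event_id, t in visual_events:
        dt = abs(t - audio_onset)
        if dt <= best_dt:
            best_id, best_dt = event_id, dt
    return best_id
```

With an audio onset at t = 1.00 s and visual onsets at 0.50, 1.03, and 2.00 s, the event at 1.03 s is selected; if no visual onset falls within the window, no binding is made.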

Figures (PMC):

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7907/3274764/eaa9e04c2f31/fnins-06-00021-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7907/3274764/815fc221c7b4/fnins-06-00021-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7907/3274764/1e26f4f91961/fnins-06-00021-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7907/3274764/3741db8f36ca/fnins-06-00021-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7907/3274764/7968042d353a/fnins-06-00021-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7907/3274764/6890bd1923b1/fnins-06-00021-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7907/3274764/0360c819afd9/fnins-06-00021-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7907/3274764/02582100367c/fnins-06-00021-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7907/3274764/30aab4f78b88/fnins-06-00021-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7907/3274764/35190a6ab892/fnins-06-00021-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7907/3274764/4f065eb1e5cf/fnins-06-00021-g011.jpg

Similar articles

1. Neuromorphic audio-visual sensor fusion on a sound-localizing robot.
   Front Neurosci. 2012 Feb 8;6:21. doi: 10.3389/fnins.2012.00021. eCollection 2012.
2. Adaptive sound localization with a silicon cochlea pair.
   Front Neurosci. 2010 Nov 29;4:196. doi: 10.3389/fnins.2010.00196. eCollection 2010.
3. The Synthetic Moth: A Neuromorphic Approach toward Artificial Olfaction in Robots.
4. Olfaction and hearing based mobile robot navigation for odor/sound source search.
   Sensors (Basel). 2011;11(2):2129-54. doi: 10.3390/s110202129. Epub 2011 Feb 11.
5. Learning to Localize Sound Sources in Visual Scenes: Analysis and Applications.
   IEEE Trans Pattern Anal Mach Intell. 2021 May;43(5):1605-1619. doi: 10.1109/TPAMI.2019.2952095. Epub 2021 Apr 1.
6. Effect of task-related continuous auditory feedback during learning of tracking motion exercises.
   J Neuroeng Rehabil. 2012 Oct 10;9:79. doi: 10.1186/1743-0003-9-79.
7. Visual Sensor Fusion Based Autonomous Robotic System for Assistive Drinking.
   Sensors (Basel). 2021 Aug 11;21(16):5419. doi: 10.3390/s21165419.
8. DMMAN: A two-stage audio-visual fusion framework for sound separation and event localization.
   Neural Netw. 2021 Jan;133:229-239. doi: 10.1016/j.neunet.2020.10.003. Epub 2020 Nov 11.
9. Blind Audio-Visual Localization and Separation via Low-Rank and Sparsity.
   IEEE Trans Cybern. 2020 May;50(5):2288-2301. doi: 10.1109/TCYB.2018.2883607. Epub 2018 Dec 13.
10. SoundCompass: a distributed MEMS microphone array-based sensor for sound source localization.
    Sensors (Basel). 2014 Jan 23;14(2):1918-49. doi: 10.3390/s140201918.

Cited by

1. Event-Based Sensing and Signal Processing in the Visual, Auditory, and Olfactory Domain: A Review.
   Front Neural Circuits. 2021 May 31;15:610446. doi: 10.3389/fncir.2021.610446. eCollection 2021.
2. Sensor-Based Control for Collaborative Robots: Fundamentals, Challenges, and Opportunities.
   Front Neurorobot. 2021 Jan 7;14:576846. doi: 10.3389/fnbot.2020.576846. eCollection 2020.
3. Proprioceptive Feedback through a Neuromorphic Muscle Spindle Model.
   Front Neurosci. 2017 Jun 14;11:341. doi: 10.3389/fnins.2017.00341. eCollection 2017.

References

1. An address-event vision sensor for multiple transient object detection.
   IEEE Trans Biomed Circuits Syst. 2007 Dec;1(4):278-88. doi: 10.1109/TBCAS.2007.916031.
2. Adaptive sound localization with a silicon cochlea pair.
   Front Neurosci. 2010 Nov 29;4:196. doi: 10.3389/fnins.2010.00196. eCollection 2010.
4. A Review of Current Neuromorphic Approaches for Vision, Auditory, and Olfactory Sensors.
   Front Neurosci. 2016 Mar 29;10:115. doi: 10.3389/fnins.2016.00115. eCollection 2016.
5. Reconstruction of audio waveforms from spike trains of artificial cochlea models.
   Front Neurosci. 2015 Oct 13;9:347. doi: 10.3389/fnins.2015.00347. eCollection 2015.