


Multimodal Object Classification Models Inspired by Multisensory Integration in the Brain.

Author Information

Amerineni Rajesh, Gupta Resh S, Gupta Lalit

Affiliations

Department of Electrical and Computer Engineering, Southern Illinois University, Carbondale, IL 62901, USA.

Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN 37232, USA.

Publication Information

Brain Sci. 2019 Jan 2;9(1):3. doi: 10.3390/brainsci9010003.

DOI: 10.3390/brainsci9010003
PMID: 30609705
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC6356735/
Abstract

Two multimodal classification models aimed at enhancing object classification through the integration of semantically congruent unimodal stimuli are introduced. The feature-integrating model, inspired by multisensory integration in the subcortical superior colliculus, combines unimodal features which are subsequently classified by a multimodal classifier. The decision-integrating model, inspired by integration in primary cortical areas, classifies unimodal stimuli independently using unimodal classifiers and classifies the combined decisions using a multimodal classifier. The multimodal classifier models are implemented using multilayer perceptrons and multivariate statistical classifiers. Experiments involving the classification of noisy and attenuated auditory and visual representations of ten digits are designed to demonstrate the properties of the multimodal classifiers and to compare the performances of multimodal and unimodal classifiers. The experimental results show that the multimodal classification systems exhibit an important aspect of the "inverse effectiveness principle" by yielding significantly higher classification accuracies when compared with those of the unimodal classifiers. Furthermore, the flexibility offered by the generalized models enables the simulations and evaluations of various combinations of multimodal stimuli and classifiers under varying uncertainty conditions.

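The two architectures described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: synthetic Gaussian features and a nearest-centroid classifier stand in for the paper's auditory/visual digit representations and its MLP/statistical classifiers. All variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_per, d = 10, 50, 16  # ten digits, toy feature dimension

# Synthetic unimodal stimuli: class-dependent means plus noise
# (stand-ins for the paper's noisy auditory and visual digit features).
means_a = rng.normal(size=(n_classes, d))
means_v = rng.normal(size=(n_classes, d))
labels = np.repeat(np.arange(n_classes), n_per)
X_a = means_a[labels] + rng.normal(scale=1.5, size=(labels.size, d))
X_v = means_v[labels] + rng.normal(scale=1.5, size=(labels.size, d))

def centroid_fit(X, y):
    # One centroid per class.
    return np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])

def centroid_scores(X, C):
    # Negative distance to each centroid serves as a class score.
    return -np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)

# Feature-integrating model (superior-colliculus-inspired):
# combine unimodal features first, then apply one multimodal classifier.
X_fused = np.hstack([X_a, X_v])
C_fused = centroid_fit(X_fused, labels)
pred_feat = centroid_scores(X_fused, C_fused).argmax(axis=1)

# Decision-integrating model (primary-cortex-inspired):
# classify each modality independently, then combine the decisions.
C_a, C_v = centroid_fit(X_a, labels), centroid_fit(X_v, labels)
combined = centroid_scores(X_a, C_a) + centroid_scores(X_v, C_v)
pred_dec = combined.argmax(axis=1)

acc = lambda p: (p == labels).mean()
print(f"audio only    : {acc(centroid_scores(X_a, C_a).argmax(axis=1)):.2f}")
print(f"visual only   : {acc(centroid_scores(X_v, C_v).argmax(axis=1)):.2f}")
print(f"feature-fused : {acc(pred_feat):.2f}")
print(f"decision-fused: {acc(pred_dec):.2f}")
```

With noisy unimodal inputs, both fusion schemes typically match or beat the unimodal classifiers, which is the qualitative pattern the abstract attributes to the inverse effectiveness principle.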

Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f8b9/6356735/e86b4bf01d62/brainsci-09-00003-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f8b9/6356735/7bd8bb067d47/brainsci-09-00003-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f8b9/6356735/65557760eba3/brainsci-09-00003-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f8b9/6356735/41e50d464b06/brainsci-09-00003-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f8b9/6356735/620a6e7743a2/brainsci-09-00003-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f8b9/6356735/2ca72ea8497d/brainsci-09-00003-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f8b9/6356735/714f74cc1ec8/brainsci-09-00003-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f8b9/6356735/0b940cf8805a/brainsci-09-00003-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f8b9/6356735/d580621b974f/brainsci-09-00003-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f8b9/6356735/30e3866814b0/brainsci-09-00003-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f8b9/6356735/6ab89fb914f5/brainsci-09-00003-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f8b9/6356735/e19adebaf2e0/brainsci-09-00003-g012.jpg

Similar Articles

1. Multimodal Object Classification Models Inspired by Multisensory Integration in the Brain.
Brain Sci. 2019 Jan 2;9(1):3. doi: 10.3390/brainsci9010003.
2. From Near-Optimal Bayesian Integration to Neuromorphic Hardware: A Neural Network Model of Multisensory Integration.
Front Neurorobot. 2020 May 15;14:29. doi: 10.3389/fnbot.2020.00029. eCollection 2020.
3. Representation and integration of multiple sensory inputs in primate superior colliculus.
J Neurophysiol. 1996 Aug;76(2):1246-66. doi: 10.1152/jn.1996.76.2.1246.
4. Multimodal Integration of Brain Images for MRI-Based Diagnosis in Schizophrenia.
Front Neurosci. 2019 Nov 7;13:1203. doi: 10.3389/fnins.2019.01203. eCollection 2019.
5. Pure tones modulate the representation of orientation and direction in the primary visual cortex.
J Neurophysiol. 2019 Jun 1;121(6):2202-2214. doi: 10.1152/jn.00069.2019. Epub 2019 Apr 10.
6. Auditory-visual integration during multimodal object recognition in humans: a behavioral and electrophysiological study.
J Cogn Neurosci. 1999 Sep;11(5):473-90. doi: 10.1162/089892999563544.
7. Neurocomputational approaches to modelling multisensory integration in the brain: a review.
Neural Netw. 2014 Dec;60:141-65. doi: 10.1016/j.neunet.2014.08.003. Epub 2014 Aug 23.
8. Multimodal Integration of Spatial Information: The Influence of Object-Related Factors and Self-Reported Strategies.
Front Psychol. 2016 Sep 21;7:1443. doi: 10.3389/fpsyg.2016.01443. eCollection 2016.
9. A theoretical study of multisensory integration in the superior colliculus by a neural network model.
Neural Netw. 2008 Aug;21(6):817-29. doi: 10.1016/j.neunet.2008.06.003. Epub 2008 Jun 22.
10. Using Bayes' rule to model multisensory enhancement in the superior colliculus.
Neural Comput. 2000 May;12(5):1165-87. doi: 10.1162/089976600300015547.

Cited By

1. Hearing temperatures: employing machine learning for elucidating the cross-modal perception of thermal properties through audition.
Front Psychol. 2024 Aug 2;15:1353490. doi: 10.3389/fpsyg.2024.1353490. eCollection 2024.
2. Multidomain Convolution Neural Network Models for Improved Event-Related Potential Classification.
Sensors (Basel). 2023 May 11;23(10):4656. doi: 10.3390/s23104656.
3. Fusion Models for Generalized Classification of Multi-Axial Human Movement: Validation in Sport Performance.
Sensors (Basel). 2021 Dec 16;21(24):8409. doi: 10.3390/s21248409.
4. CINET: A Brain-Inspired Deep Learning Context-Integrating Neural Network Model for Resolving Ambiguous Stimuli.
Brain Sci. 2020 Jan 24;10(2):64. doi: 10.3390/brainsci10020064.

References

1. Pairwise diversity ranking of polychotomous features for ensemble physiological signal classifiers.
Proc Inst Mech Eng H. 2013 Jun;227(6):655-62. doi: 10.1177/0954411913480621. Epub 2013 Apr 4.
2. Attention and the multiple stages of multisensory integration: A review of audiovisual studies.
Acta Psychol (Amst). 2010 Jul;134(3):372-84. doi: 10.1016/j.actpsy.2010.03.010. Epub 2010 Apr 27.
3. The principle of inverse effectiveness in multisensory integration: some statistical considerations.
Brain Topogr. 2009 May;21(3-4):168-76. doi: 10.1007/s10548-009-0097-2. Epub 2009 Apr 29.
4. Multisensory integration in the superior colliculus: a neural network model.
J Comput Neurosci. 2009 Feb;26(1):55-73. doi: 10.1007/s10827-008-0096-4. Epub 2008 May 14.
5. Multisensory integration: current issues from the perspective of the single neuron.
Nat Rev Neurosci. 2008 Apr;9(4):255-66. doi: 10.1038/nrn2331.
6. Multisensory interplay reveals crossmodal influences on 'sensory-specific' brain regions, neural responses, and judgments.
Neuron. 2008 Jan 10;57(1):11-23. doi: 10.1016/j.neuron.2007.12.013.
7. Is neocortex essentially multisensory?
Trends Cogn Sci. 2006 Jun;10(6):278-85. doi: 10.1016/j.tics.2006.04.008. Epub 2006 May 18.
8. Multichannel fusion models for the parametric classification of differential brain activity.
IEEE Trans Biomed Eng. 2005 Nov;52(11):1869-81. doi: 10.1109/TBME.2005.856272.
9. Merging the senses into a robust percept.
Trends Cogn Sci. 2004 Apr;8(4):162-9. doi: 10.1016/j.tics.2004.02.002.
10. The ventriloquist effect results from near-optimal bimodal integration.
Curr Biol. 2004 Feb 3;14(3):257-62. doi: 10.1016/j.cub.2004.01.029.