Suppr 超能文献

What Can Computational Models Learn From Human Selective Attention? A Review From an Audiovisual Unimodal and Crossmodal Perspective.

Author Information

Fu Di, Weber Cornelius, Yang Guochun, Kerzel Matthias, Nan Weizhi, Barros Pablo, Wu Haiyan, Liu Xun, Wermter Stefan

Affiliations

CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China.

Department of Psychology, University of Chinese Academy of Sciences, Beijing, China.

Publication Information

Front Integr Neurosci. 2020 Feb 27;14:10. doi: 10.3389/fnint.2020.00010. eCollection 2020.

DOI: 10.3389/fnint.2020.00010
PMID: 32174816
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7056875/
Abstract

Selective attention plays an essential role in information acquisition and utilization from the environment. In the past 50 years, research on selective attention has been a central topic in cognitive science. Compared with unimodal studies, crossmodal studies are more complex but necessary to solve real-world challenges in both human experiments and computational modeling. Although an increasing number of findings on crossmodal selective attention have shed light on humans' behavioral patterns and neural underpinnings, a much better understanding is still necessary to yield the same benefit for intelligent computational agents. This article reviews studies of selective attention in unimodal visual and auditory and crossmodal audiovisual setups from the multidisciplinary perspectives of psychology and cognitive neuroscience, and evaluates different ways to simulate analogous mechanisms in computational models and robotics. We discuss the gaps between these fields in this interdisciplinary review and provide insights about how to use psychological findings and theories in artificial intelligence from different perspectives.

Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/83f4/7056875/f306ee9fe7fb/fnint-14-00010-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/83f4/7056875/a4d338e0ecfc/fnint-14-00010-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/83f4/7056875/ff968ce19f4a/fnint-14-00010-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/83f4/7056875/36e87b7db6d7/fnint-14-00010-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/83f4/7056875/2a1ecb58475e/fnint-14-00010-g0005.jpg

Similar Articles

1
What Can Computational Models Learn From Human Selective Attention? A Review From an Audiovisual Unimodal and Crossmodal Perspective.
Front Integr Neurosci. 2020 Feb 27;14:10. doi: 10.3389/fnint.2020.00010. eCollection 2020.
2
A functional MRI investigation of crossmodal interference in an audiovisual Stroop task.
PLoS One. 2019 Jan 15;14(1):e0210736. doi: 10.1371/journal.pone.0210736. eCollection 2019.
3
Crossmodal to unimodal transfer of temporal perceptual learning.
Perception. 2024 Nov;53(11-12):753-762. doi: 10.1177/03010066241270271. Epub 2024 Aug 12.
4
Supramodal executive control of attention: Evidence from unimodal and crossmodal dual conflict effects.
Cortex. 2020 Dec;133:266-276. doi: 10.1016/j.cortex.2020.09.018. Epub 2020 Oct 6.
5
Understanding Human Cognition Through Computational Modeling.
Top Cogn Sci. 2024 Jul;16(3):349-376. doi: 10.1111/tops.12737. Epub 2024 May 23.
6
Crossmodal integration enhances neural representation of task-relevant features in audiovisual face perception.
Cereb Cortex. 2015 Feb;25(2):384-95. doi: 10.1093/cercor/bht228. Epub 2013 Aug 26.
7
The Influence of Selective and Divided Attention on Audiovisual Integration in Children.
Perception. 2016 May;45(5):515-526. doi: 10.1177/0301006616629025.
8
Enriched learning: behavior, brain, and computation.
Trends Cogn Sci. 2023 Jan;27(1):81-97. doi: 10.1016/j.tics.2022.10.007. Epub 2022 Nov 28.
9
Crossmodal integration of object features: voxel-based correlations in brain-damaged patients.
Brain. 2009 Mar;132(Pt 3):671-83. doi: 10.1093/brain/awn361. Epub 2009 Feb 3.
10
Neural Correlates of Feedback Processing in Visuo-Tactile Crossmodal Paired-Associate Learning.
Front Hum Neurosci. 2018 Jul 3;12:266. doi: 10.3389/fnhum.2018.00266. eCollection 2018.

Cited By

1
A Scoping Review of the Role of Attention in Tinnitus Management.
Semin Hear. 2025 Mar 6;45(3-04):317-330. doi: 10.1055/s-0045-1804903. eCollection 2024 Aug.
2
The effect of transcranial random noise stimulation (tRNS) over bilateral parietal cortex in visual cross-modal conflicts.
Sci Rep. 2025 Feb 10;15(1):4980. doi: 10.1038/s41598-025-85682-z.
3
Pain recognition and pain empathy from a human-centered AI perspective.

References

1
Five Factors that Guide Attention in Visual Search.
Nat Hum Behav. 2017 Mar;1(3). doi: 10.1038/s41562-017-0058. Epub 2017 Mar 8.
2
Meaning-based guidance of attention in scenes as revealed by meaning maps.
Nat Hum Behav. 2017 Oct;1(10):743-747. doi: 10.1038/s41562-017-0208-0. Epub 2017 Sep 25.
3
Frontal and parietal alpha oscillations reflect attentional modulation of cross-modal matching.
iScience. 2024 Jul 23;27(8):110570. doi: 10.1016/j.isci.2024.110570. eCollection 2024 Aug 16.
4
Exploring behavioral adjustments of proportion congruency manipulations in an Eriksen flanker task with visual and auditory distractor modalities.
Mem Cognit. 2024 Jan;52(1):91-114. doi: 10.3758/s13421-023-01447-x. Epub 2023 Aug 7.
5
Effect of Audiovisual Cross-Modal Conflict during Working Memory Tasks: A Near-Infrared Spectroscopy Study.
Brain Sci. 2022 Mar 3;12(3):349. doi: 10.3390/brainsci12030349.
6
Making sense of periodicity glimpses in a prediction-update-loop-A computational model of attentive voice tracking.
J Acoust Soc Am. 2022 Feb;151(2):712. doi: 10.1121/10.0009337.
7
Listening Effort Informed Quality of Experience Evaluation.
Front Psychol. 2022 Jan 5;12:767840. doi: 10.3389/fpsyg.2021.767840. eCollection 2021.
8
Crossmodal Pattern Discrimination in Humans and Robots: A Visuo-Tactile Case Study.
Front Robot AI. 2020 Dec 23;7:540565. doi: 10.3389/frobt.2020.540565. eCollection 2020.
Sci Rep. 2019 Mar 22;9(1):5030. doi: 10.1038/s41598-019-41636-w.
4
Frequency-Following Responses to Complex Tones at Different Frequencies Reflect Different Source Configurations.
Front Neurosci. 2019 Feb 26;13:130. doi: 10.3389/fnins.2019.00130. eCollection 2019.
5
Modulation of phase-locked neural responses to speech during different arousal states is age-dependent.
Neuroimage. 2019 Apr 1;189:734-744. doi: 10.1016/j.neuroimage.2019.01.049. Epub 2019 Jan 28.
6
Cross-Modal Attentional Context Learning for RGB-D Object Detection.
IEEE Trans Image Process. 2019 Apr;28(4):1591-1601. doi: 10.1109/TIP.2018.2878956. Epub 2018 Oct 31.
7
Enhanced Robot Speech Recognition Using Biomimetic Binaural Sound Source Localization.
IEEE Trans Neural Netw Learn Syst. 2019 Jan;30(1):138-150. doi: 10.1109/TNNLS.2018.2830119. Epub 2018 Jun 4.
8
Visual Analytics in Deep Learning: An Interrogative Survey for the Next Frontiers.
IEEE Trans Vis Comput Graph. 2018 Jun 4. doi: 10.1109/TVCG.2018.2843369.
9
Neural mechanisms for selectively tuning in to the target speaker in a naturalistic noisy situation.
Nat Commun. 2018 Jun 19;9(1):2405. doi: 10.1038/s41467-018-04819-z.
10
Attentional Bias in Human Category Learning: The Case of Deep Learning.
Front Psychol. 2018 Apr 13;9:374. doi: 10.3389/fpsyg.2018.00374. eCollection 2018.