

Multisensory stimuli facilitate low-level perceptual learning on a difficult global motion task in virtual reality.

Authors

Fromm Catherine A, Maddox Ross K, Polonenko Melissa J, Huxlin Krystel R, Diaz Gabriel J

Affiliations

Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology, Rochester, New York, USA.

Department of Brain and Cognitive Science, University of Rochester, Rochester, New York, USA.

Publication Information

PLoS One. 2025 Mar 4;20(3):e0319007. doi: 10.1371/journal.pone.0319007. eCollection 2025.

DOI: 10.1371/journal.pone.0319007
PMID: 40036211
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11878941/
Abstract

The present study investigates the feasibility of inducing visual perceptual learning on a peripheral, global direction discrimination and integration task in virtual reality, and tests whether audio-visual multisensory training induces faster or greater visual learning than unisensory visual training. Seventeen participants completed a 10-day training experiment wherein they repeatedly performed a 4-alternative, combined visual global-motion and direction discrimination task at 10° azimuth/elevation in a virtual environment. A visual-only group of 8 participants was trained using a unimodal visual stimulus. An audio-visual group of 9 participants underwent training whereby the visual stimulus was always paired with a pulsed, white-noise auditory cue that simulated auditory motion in a direction consistent with the horizontal component of the visual motion stimulus. Our results reveal that, for both groups, learning occurred and transferred to untrained locations. For the AV group, there was an additional performance benefit to training from the AV cue to horizontal motion. This benefit extended into the unisensory post-test, where the auditory cue was removed. However, this benefit did not generalize spatially to previously untrained areas. This spatial specificity suggests that AV learning may have occurred at a lower level in the visual pathways, compared to visual-only learning.

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2f7b/11878941/9eef6f6a9f94/pone.0319007.g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2f7b/11878941/b9737b6243d9/pone.0319007.g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2f7b/11878941/45c40587bc88/pone.0319007.g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2f7b/11878941/06749c30a5c7/pone.0319007.g004.jpg

Similar Articles

1. Enhancing learning outcomes through multisensory integration: A fMRI study of audio-visual training in virtual reality.
   Neuroimage. 2024 Jan;285:120483. doi: 10.1016/j.neuroimage.2023.120483. Epub 2023 Dec 2.
2. Spatial shifts of audio-visual interactions by perceptual learning are specific to the trained orientation and eye.
   Seeing Perceiving. 2011;24(6):579-94. doi: 10.1163/187847611X603738.
3. Audio-visual multisensory training enhances visual processing of motion stimuli in healthy participants: an electrophysiological study.
   Eur J Neurosci. 2016 Nov;44(10):2748-2758. doi: 10.1111/ejn.13221. Epub 2016 Mar 31.
4. Integration of auditory and visual cues in spatial navigation under normal and impaired viewing conditions.
   J Vis. 2024 Oct 3;24(11):7. doi: 10.1167/jov.24.11.7.
5. Emergence of β and γ networks following multisensory training.
   Neuroimage. 2020 Feb 1;206:116313. doi: 10.1016/j.neuroimage.2019.116313. Epub 2019 Oct 30.
6. Integrating Visual Information into the Auditory Cortex Promotes Sound Discrimination through Choice-Related Multisensory Integration.
   J Neurosci. 2022 Nov 9;42(45):8556-8568. doi: 10.1523/JNEUROSCI.0793-22.2022. Epub 2022 Sep 23.
7. Selective integration of auditory-visual looming cues by humans.
   Neuropsychologia. 2009 Mar;47(4):1045-52. doi: 10.1016/j.neuropsychologia.2008.11.003. Epub 2008 Nov 12.
8. Crossmodal interactions and multisensory integration in the perception of audio-visual motion -- a free-field study.
   Brain Res. 2012 Jul 23;1466:99-111. doi: 10.1016/j.brainres.2012.05.015. Epub 2012 May 14.
9. Sound facilitates visual learning.
   Curr Biol. 2006 Jul 25;16(14):1422-7. doi: 10.1016/j.cub.2006.05.048.

References Cited in This Article

1. Enhancing learning outcomes through multisensory integration: A fMRI study of audio-visual training in virtual reality.
   Neuroimage. 2024 Jan;285:120483. doi: 10.1016/j.neuroimage.2023.120483. Epub 2023 Dec 2.
2. Current directions in visual perceptual learning.
   Nat Rev Psychol. 2022 Nov;1(11):654-668. doi: 10.1038/s44159-022-00107-2. Epub 2022 Sep 27.
3. Variability in training unlocks generalization in visual perceptual learning through invariant representations.
   Curr Biol. 2023 Mar 13;33(5):817-826.e3. doi: 10.1016/j.cub.2023.01.011. Epub 2023 Jan 31.
4. Benefits of Endogenous Spatial Attention During Visual Double-Training in Cortically-Blinded Fields.
   Front Neurosci. 2022 Apr 14;16:771623. doi: 10.3389/fnins.2022.771623. eCollection 2022.
5. Rehabilitation of visual perception in cortical blindness.
   Handb Clin Neurol. 2022;184:357-373. doi: 10.1016/B978-0-12-819410-2.00030-8.
6. Feature-based attention enables robust, long-lasting location transfer in human perceptual learning.
   Sci Rep. 2021 Jul 6;11(1):13914. doi: 10.1038/s41598-021-93016-y.
7. Perceptual Inference, Learning, and Attention in a Multisensory World.
   Annu Rev Neurosci. 2021 Jul 8;44:449-473. doi: 10.1146/annurev-neuro-100120-085519. Epub 2021 Apr 21.
8. Rehabilitation of cortically induced visual field loss.
   Curr Opin Neurol. 2021 Feb 1;34(1):67-74. doi: 10.1097/WCO.0000000000000884.
9. Role of endogenous and exogenous attention in task-relevant visual perceptual learning.
   PLoS One. 2020 Aug 28;15(8):e0237912. doi: 10.1371/journal.pone.0237912. eCollection 2020.
10. Audio-visual spatial alignment improves integration in the presence of a competing audio-visual stimulus.
   Neuropsychologia. 2020 Sep;146:107530. doi: 10.1016/j.neuropsychologia.2020.107530. Epub 2020 Jun 20.