The VESPA: a method for the rapid estimation of a visual evoked potential.

Author information

Lalor Edmund C, Pearlmutter Barak A, Reilly Richard B, McDarby Gary, Foxe John J

Affiliations

School of Electrical, Electronic and Mechanical Engineering, University College Dublin, Belfield, and Cognitive Neurophysiology Laboratory, St Vincent's Hospital, Fairview, Dublin, Ireland.

Publication information

Neuroimage. 2006 Oct 1;32(4):1549-61. doi: 10.1016/j.neuroimage.2006.05.054. Epub 2006 Jul 27.

Abstract

Faster and less obtrusive means for measuring a visual evoked potential (VEP) would be valuable in clinical testing and basic neuroscience research. This study presents a method for accomplishing this by smoothly modulating the luminance of a visual stimulus using a stochastic process. Despite its visually unobtrusive nature, the rich statistical structure of the stimulus enables rapid estimation of the visual system's impulse response. The profile of these responses, which we call VESPAs, correlates with standard VEPs, with r = 0.91, p < 10⁻²⁸ for the group average. The time taken to obtain a VESPA with a given signal-to-noise ratio compares favorably to that required to obtain a VEP with a similar level of certainty. Additionally, we show that VESPA responses to two independent stimuli can be obtained simultaneously, which could drastically reduce the time required to collect responses to multiple stimuli. The new method appears to provide a useful alternative to standard VEP methods, and to have potential application both in clinical practice and to the study of sensory and perceptual functions.
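The approach the abstract describes amounts to linear system identification: the stimulus luminance follows a known stochastic signal, and the impulse response linking stimulus to EEG is recovered by linear regression on time-lagged copies of the stimulus. The following is a minimal sketch of that idea on synthetic data, not the authors' implementation; the sampling rate, lag window, noise level, and "true" response kernel are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 60        # stimulus update rate in Hz (assumed for illustration)
n = 60 * fs    # 60 s of samples
lags = 30      # estimation window of 30 lags (~500 ms)

# Stochastic luminance modulation: zero-mean Gaussian noise stimulus
stim = rng.standard_normal(n)

# Hypothetical "true" impulse response: a damped oscillation
t = np.arange(lags) / fs
true_w = np.exp(-8.0 * t) * np.sin(2 * np.pi * 6 * t)

# Synthetic EEG: stimulus convolved with the response, plus sensor noise
eeg = np.convolve(stim, true_w)[:n] + 0.5 * rng.standard_normal(n)

# Lagged design matrix X, where X[i, k] = stim[i - k],
# then ordinary least squares: w = argmin ||X w - eeg||^2
X = np.column_stack([np.roll(stim, k) for k in range(lags)])
X[:lags] = 0.0  # discard samples contaminated by wrap-around
w_hat, *_ = np.linalg.lstsq(X, eeg, rcond=None)

# The estimate should closely track the true impulse response
r = np.corrcoef(w_hat, true_w)[0, 1]
print(round(r, 2))
```

Because the stimulus is white, the lagged regressors are nearly uncorrelated and the least-squares solution converges quickly, which is the intuition behind the abstract's claim that a rich stochastic stimulus permits rapid estimation.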
