

The timing mega-study: comparing a range of experiment generators, both lab-based and online.

Author information

David Bridges, Alain Pitiot, Michael R. MacAskill, Jonathan W. Peirce

Affiliations

School of Psychology, University of Nottingham, Nottingham, UK.

Laboratory of Image and Data Analysis, Ilixa Ltd., London, UK.

Publication information

PeerJ. 2020 Jul 20;8:e9414. doi: 10.7717/peerj.9414. eCollection 2020.

Abstract

Many researchers in the behavioral sciences depend on research software that presents stimuli, and records response times, with sub-millisecond precision. There are a large number of software packages with which to conduct these behavioral experiments and measure response times and performance of participants. Very little information is available, however, on what timing performance they achieve in practice. Here we report a wide-ranging study looking at the precision and accuracy of visual and auditory stimulus timing and response times, measured with a Black Box Toolkit. We compared a range of popular packages: PsychoPy, E-Prime®, NBS Presentation®, Psychophysics Toolbox, OpenSesame, Expyriment, Gorilla, jsPsych, Lab.js and Testable. Where possible, the packages were tested on Windows, macOS, and Ubuntu, and in a range of browsers for the online studies, to try to identify common patterns in performance. Among the lab-based packages, Psychtoolbox, PsychoPy, Presentation and E-Prime provided the best timing, all with mean precision under 1 millisecond across the visual, audio and response measures. OpenSesame had slightly less precision across the board, most notably in audio stimuli, and Expyriment had rather poor precision. Across operating systems, the pattern was that precision was generally very slightly better under Ubuntu than Windows, and that macOS was the worst, at least for visual stimuli, for all packages. Online systems did not deliver the same level of precision as lab-based systems, with slightly more variability in all measurements. That said, PsychoPy and Gorilla, broadly the best performers, achieved very close to millisecond precision on several browser/operating-system combinations. For response times (measured using a high-performance button box), most of the packages achieved precision at least under 10 ms in all browsers, with PsychoPy achieving a precision under 3.5 ms in all.
There was considerable variability between OS/browser combinations, especially in audio-visual synchrony, which is the least precise aspect of the browser-based experiments. Nonetheless, the data indicate that online methods can be suitable for a wide range of studies, with due thought about the sources of variability that result. The results, from over 110,000 trials, highlight the wide range of timing qualities that can occur even in software packages dedicated to the task. We stress the importance of scientists making their own timing validation measurements for their own stimuli and computer configuration.
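The abstract's closing recommendation — that researchers validate timing on their own stimuli and hardware — can be illustrated with a minimal, package-agnostic sketch. The function name, target interval, and trial count below are illustrative assumptions, not part of the study, and software timestamps like these are themselves subject to OS scheduling jitter; the paper's measurements instead used external hardware (a Black Box Toolkit) precisely to avoid trusting the software's own clock.

```python
import statistics
import time


def measure_interval_precision(target_ms: float = 16.7, n_trials: int = 100):
    """Repeatedly wait for a nominal interval (here, one 60 Hz frame) and
    record the actually elapsed time, mimicking how a timing-validation loop
    separates accuracy (mean deviation from target, i.e. constant lag) from
    precision (trial-to-trial variability, reported as SD)."""
    errors_ms = []
    for _ in range(n_trials):
        t0 = time.perf_counter()
        time.sleep(target_ms / 1000.0)
        elapsed_ms = (time.perf_counter() - t0) * 1000.0
        errors_ms.append(elapsed_ms - target_ms)
    accuracy = statistics.mean(errors_ms)   # systematic bias (lag) in ms
    precision = statistics.stdev(errors_ms)  # trial-to-trial variability in ms
    return accuracy, precision


accuracy, precision = measure_interval_precision()
print(f"mean lag: {accuracy:.3f} ms, SD: {precision:.3f} ms")
```

The accuracy/precision split mirrors the paper's reporting: a constant lag can often be corrected for after the fact, whereas trial-to-trial variability adds irreducible noise to response-time measures.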


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7b47/7512138/cd584f293da6/peerj-08-9414-g001.jpg
