Cross-modal congruency benefits for combined tactile and visual signaling.

Author Information

James L. Merlo, Aaron R. Duley, Peter A. Hancock

Affiliation

United States Military Academy, West Point, NY 10996, USA.

Publication Information

Am J Psychol. 2010 Winter;123(4):413-24. doi: 10.5406/amerjpsyc.123.4.0413.

Abstract

This series of experiments tested the assimilation and efficacy of tactile messages that were created based on five common military arm and hand signals. We compared the response times and accuracy rates for these tactile representations against responses to equivalent visual representations of the same messages. Experimentally, such messages were displayed in either tactile or visual forms alone, or using both modalities in combination. There was a performance benefit for concurrent message presentations, which showed superior response times and improved accuracy rates when compared with individual presentations in either modality alone. Such improvement was due largely to a reduction in premotor response time. These improvements occurred equally in military and nonmilitary samples. Potential reasons for this multimodal facilitation are discussed. On a practical level, these results confirm the utility of tactile messaging to augment visual messaging, especially in challenging and stressful environments where visual messaging is not feasible or effective.
