Department of Psychology, Graduate School of Education, Hiroshima University, 1-1-1, Kagamiyama, Higashi-Hiroshima, 739-0046, Japan.
Department of Psychiatry and Neurosciences, Graduate School of Biomedical & Health Sciences, Hiroshima University, Hiroshima, Japan.
Behav Res Methods. 2018 Aug;50(4):1415-1429. doi: 10.3758/s13428-018-1027-6.
Using appropriate stimuli to evoke emotions is especially important for researching emotion. Psychologists have provided several standardized affective stimulus databases, such as the International Affective Picture System (IAPS) and the Nencki Affective Picture System (NAPS) as visual stimulus databases, as well as the International Affective Digitized Sounds (IADS) and the Montreal Affective Voices as auditory stimulus databases for emotional experiments. However, because of limitations in the existing auditory stimulus databases, research using auditory stimuli remains relatively limited compared with research using visual stimuli. First, the number of sample sounds is limited, making it difficult to equate stimuli across emotional conditions and semantic categories. Second, some artificially created materials (e.g., music or human voices) may fail to accurately drive the intended emotional processes. Our principal aim was to expand the existing auditory affective sample databases to cover natural sounds more fully. We asked 207 participants to rate 935 sounds (including the sounds from the IADS-2) using the Self-Assessment Manikin (SAM) and three basic-emotion rating scales. The results showed that emotions in sounds can be distinguished on the affective rating scales, and the stability of the sound evaluations indicated that we have successfully provided a larger corpus of natural, emotionally evocative auditory stimuli covering a wide range of semantic categories. Our expanded, standardized sound sample database may promote a wide range of research on auditory systems and their possible interactions with other sensory modalities, encouraging direct and reliable comparisons of outcomes from different researchers in the field of psychology.
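As an illustration of how normative databases of this kind are typically used, the sketch below aggregates per-participant SAM ratings into per-sound means and standard deviations and flags consistently high-arousal sounds. This is not the authors' analysis pipeline; the file name and column names ("sound_id", "valence", "arousal", "dominance") and the selection thresholds are hypothetical placeholders.

```python
# Minimal sketch (not from the paper): summarizing per-sound SAM ratings into
# normative statistics, the form in which such databases usually report values.
import pandas as pd

ratings = pd.read_csv("sound_ratings.csv")  # hypothetical: one row per participant x sound

norms = (
    ratings
    .groupby("sound_id")[["valence", "arousal", "dominance"]]
    .agg(["mean", "std", "count"])  # mean rating, spread, and number of raters per sound
)

# Sounds with extreme yet consistent ratings are the most useful emotional stimuli;
# e.g., flag high-arousal, low-variability candidates (thresholds are illustrative).
candidates = norms[
    (norms[("arousal", "mean")] > 6) & (norms[("arousal", "std")] < 1.5)
]
print(candidates.head())
```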