Speech Driven Gaze in a Face-to-Face Interaction.

Author Information

Ülkü Arslan Aydin, Sinan Kalkan, Cengiz Acartürk

Affiliations

Cognitive Science Department, Middle East Technical University, Ankara, Turkey.

Computer Engineering Department, Middle East Technical University, Ankara, Turkey.

Publication Information

Front Neurorobot. 2021 Mar 4;15:598895. doi: 10.3389/fnbot.2021.598895. eCollection 2021.

Abstract

Gaze and language are major pillars of multimodal communication. Gaze is a non-verbal mechanism that conveys crucial social signals in face-to-face conversation. However, compared to language, gaze has been less studied as a communication modality. The purpose of the present study is twofold: (i) to investigate gaze direction (i.e., gaze aversion and face gaze) and its relation to speech in a face-to-face interaction; and (ii) to propose a computational model for multimodal communication that predicts gaze direction from high-level speech features. Twenty-eight pairs of participants took part in data collection. The experimental setting was a mock job interview. Eye movements were recorded for both participants in each pair. The speech data were annotated according to the ISO 24617-2 standard for dialogue act annotation, supplemented with manual tags based on previous social gaze studies. A comparative analysis was conducted with Convolutional Neural Network (CNN) models built on two specific architectures, VGGNet and ResNet. The results showed that the frequency and the duration of gaze differ significantly depending on the participant's role. Moreover, the ResNet models achieved higher than 70% accuracy in predicting gaze direction.
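The abstract does not include implementation details, but the prediction task it describes can be made concrete. Below is a minimal sketch, assuming a PyTorch implementation, of a 1-D ResNet-style CNN that maps a fixed-length window of high-level speech features (e.g., vectorized dialogue-act tags per time step) to a binary gaze label (gaze aversion vs. face gaze). The feature dimensionality, window length, network depth, and class layout are illustrative assumptions, not the authors' published configuration.

```python
# Illustrative sketch, not the authors' code: a 1-D ResNet-style classifier
# that predicts gaze direction from a window of speech-feature frames.
import torch
import torch.nn as nn

class ResidualBlock1d(nn.Module):
    # Two 3-wide convolutions with an identity shortcut (the core ResNet idea).
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm1d(channels)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm1d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # skip connection

class GazeFromSpeech(nn.Module):
    # Maps (batch, n_features, time) speech-feature windows to gaze logits.
    def __init__(self, n_features: int = 32, n_classes: int = 2):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv1d(n_features, 64, kernel_size=3, padding=1),
            nn.BatchNorm1d(64),
            nn.ReLU(),
        )
        self.blocks = nn.Sequential(ResidualBlock1d(64), ResidualBlock1d(64))
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        h = self.blocks(self.stem(x))  # (batch, 64, time)
        h = h.mean(dim=-1)             # global average pooling over time
        return self.head(h)            # logits: [aversion, face gaze]

# Toy usage: 8 windows, 32 hypothetical speech features, 50 time frames each.
model = GazeFromSpeech()
logits = model(torch.randn(8, 32, 50))
print(logits.shape)  # torch.Size([8, 2])
```

The identity shortcut in each block is the defining ResNet ingredient; a VGGNet-style baseline for the paper's comparative analysis would stack plain convolutional layers without the `out + x` skip connection.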

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/eed5/7970197/774eba262db9/fnbot-15-598895-g0001.jpg

Similar Articles

1. MAGiC: A Multimodal Framework for Analysing Gaze in Dyadic Communication. J Eye Mov Res. 2018 Nov 12;11(6). doi: 10.16910/jemr.11.6.2.
2. Timing of gazes in child dialogues: a time-course analysis of requests and back channelling in referential communication. Int J Lang Commun Disord. 2012 Jul-Aug;47(4):373-83. doi: 10.1111/j.1460-6984.2012.00151.x. Epub 2012 Mar 5.
3. Objective eye-gaze behaviour during face-to-face communication with proficient alaryngeal speakers: a preliminary study. Int J Lang Commun Disord. 2011 Sep-Oct;46(5):535-49. doi: 10.1111/j.1460-6984.2011.00005.x. Epub 2011 Mar 7.
4. Gaze aversion to stuttered speech: a pilot study investigating differential visual attention to stuttered and fluent speech. Int J Lang Commun Disord. 2010 Mar-Apr;45(2):133-44. doi: 10.3109/13682820902763951.
5. How does the topic of conversation affect verbal exchange and eye gaze? A comparison between typical development and high-functioning autism. Neuropsychologia. 2010 Jul;48(9):2730-9. doi: 10.1016/j.neuropsychologia.2010.05.020. Epub 2010 May 21.
6. Gaze aversion in conversational settings: An investigation based on mock job interview. J Eye Mov Res. 2021 May 19;14(1). doi: 10.16910/jemr.14.1.1.
7. Multimodal Communication in Aphasia: Perception and Production of Co-speech Gestures During Face-to-Face Conversation. Front Hum Neurosci. 2018 Jun 14;12:200. doi: 10.3389/fnhum.2018.00200. eCollection 2018.
8. Effects of being watched on eye gaze and facial displays of typical and autistic individuals during conversation. Autism. 2021 Jan;25(1):210-226. doi: 10.1177/1362361320951691. Epub 2020 Aug 27.
9. Using dual eye tracking to uncover personal gaze patterns during social interaction. Sci Rep. 2018 Mar 9;8(1):4271. doi: 10.1038/s41598-018-22726-7.
