


EmoCo: Visual Analysis of Emotion Coherence in Presentation Videos.

Authors

Zeng Haipeng, Wang Xingbo, Wu Aoyu, Wang Yong, Li Quan, Endert Alex, Qu Huamin

Publication Info

IEEE Trans Vis Comput Graph. 2020 Jan;26(1):927-937. doi: 10.1109/TVCG.2019.2934656. Epub 2019 Aug 20.

DOI: 10.1109/TVCG.2019.2934656
PMID: 31443002
Abstract

Emotions play a key role in human communication and public presentations. Human emotions are usually expressed through multiple modalities. Therefore, exploring multimodal emotions and their coherence is of great value for understanding emotional expressions in presentations and improving presentation skills. However, manually watching and studying presentation videos is often tedious and time-consuming. There is a lack of tool support to help conduct an efficient and in-depth multi-level analysis. Thus, in this paper, we introduce EmoCo, an interactive visual analytics system to facilitate efficient analysis of emotion coherence across facial, text, and audio modalities in presentation videos. Our visualization system features a channel coherence view and a sentence clustering view that together enable users to obtain a quick overview of emotion coherence and its temporal evolution. In addition, a detail view and word view enable detailed exploration and comparison from the sentence level and word level, respectively. We thoroughly evaluate the proposed system and visualization techniques through two usage scenarios based on TED Talk videos and interviews with two domain experts. The results demonstrate the effectiveness of our system in gaining insights into emotion coherence in presentations.
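The abstract does not spell out how emotion coherence across modalities is quantified. As a minimal, hypothetical sketch (not the paper's actual method), coherence over a clip could be measured as the fraction of time segments in which the facial, text, and audio emotion labels all agree:

```python
def coherence(face, text, audio):
    """Toy emotion-coherence score: the fraction of time segments
    in which all three modality labels agree. The label sequences
    and the agreement criterion are illustrative assumptions, not
    EmoCo's actual metric."""
    assert len(face) == len(text) == len(audio)
    agree = sum(f == t == a for f, t, a in zip(face, text, audio))
    return agree / len(face)

# Hypothetical per-segment emotion labels for a short presentation clip.
face  = ["happy", "neutral", "happy", "sad"]
text  = ["happy", "happy",   "happy", "sad"]
audio = ["happy", "neutral", "sad",   "sad"]

print(coherence(face, text, audio))  # 0.5 (2 of 4 segments agree)
```

A real system would derive these per-segment labels from classifiers on each modality and would likely track pairwise as well as three-way agreement over time, which is what views like the channel coherence view visualize.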


Similar Articles

1. EmoCo: Visual Analysis of Emotion Coherence in Presentation Videos. IEEE Trans Vis Comput Graph. 2020 Jan;26(1):927-937. doi: 10.1109/TVCG.2019.2934656. Epub 2019 Aug 20.
2. EmotionCues: Emotion-Oriented Visual Summarization of Classroom Videos. IEEE Trans Vis Comput Graph. 2021 Jul;27(7):3168-3181. doi: 10.1109/TVCG.2019.2963659. Epub 2021 May 27.
3. GestureLens: Visual Analysis of Gestures in Presentation Videos. IEEE Trans Vis Comput Graph. 2023 Aug;29(8):3685-3697. doi: 10.1109/TVCG.2022.3169175. Epub 2023 Jun 29.
4. Time-Delay Neural Network for Continuous Emotional Dimension Prediction From Facial Expression Sequences. IEEE Trans Cybern. 2016 Apr;46(4):916-29. doi: 10.1109/TCYB.2015.2418092. Epub 2015 Apr 21.
5. Talking Face Generation With Audio-Deduced Emotional Landmarks. IEEE Trans Neural Netw Learn Syst. 2024 Oct;35(10):14099-14111. doi: 10.1109/TNNLS.2023.3274676. Epub 2024 Oct 7.
6. Interactive exploration of surveillance video through action shot summarization and trajectory visualization. IEEE Trans Vis Comput Graph. 2013 Dec;19(12):2119-28. doi: 10.1109/TVCG.2013.168.
7. CDGT: Constructing diverse graph transformers for emotion recognition from facial videos. Neural Netw. 2024 Nov;179:106573. doi: 10.1016/j.neunet.2024.106573. Epub 2024 Jul 25.
8. Emotion recognition from single-channel EEG signals using a two-stage correlation and instantaneous frequency-based filtering method. Comput Methods Programs Biomed. 2019 May;173:157-165. doi: 10.1016/j.cmpb.2019.03.015. Epub 2019 Mar 22.
9. MotionFlow: Visual Abstraction and Aggregation of Sequential Patterns in Human Motion Tracking Data. IEEE Trans Vis Comput Graph. 2016 Jan;22(1):21-30. doi: 10.1109/TVCG.2015.2468292.
10. Emotion schemas are embedded in the human visual system. Sci Adv. 2019 Jul 24;5(7):eaaw4358. doi: 10.1126/sciadv.aaw4358. eCollection 2019 Jul.

Cited By

1. Deep-Learning-Based Multimodal Emotion Classification for Music Videos. Sensors (Basel). 2021 Jul 20;21(14):4927. doi: 10.3390/s21144927.