

Towards Automatic Collaboration Analytics for Group Speech Data Using Learning Analytics.

Affiliations

Educational Science Faculty, Open University of the Netherlands, 6419 AT Heerlen, The Netherlands.

Institute of Education Science, Ruhr-Universität Bochum, 44801 Bochum, Germany.

Publication Information

Sensors (Basel). 2021 May 2;21(9):3156. doi: 10.3390/s21093156.

DOI: 10.3390/s21093156
PMID: 34063180
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8124177/
Abstract

Collaboration is an important 21st Century skill. Co-located (or face-to-face) collaboration (CC) analytics gained momentum with the advent of sensor technology. Most of these works have used the audio modality to detect the quality of CC. CC quality can be detected from simple indicators of collaboration, such as total speaking time, or complex indicators, like synchrony in the rise and fall of the average pitch. Most studies in the past focused on "how group members talk" (i.e., spectral and temporal features of audio, such as pitch) and not on "what they talk about". The "what" of the conversations is more overt than the "how" of the conversations. Very few studies examined "what" group members talk about, and these studies were lab-based, showing a representative overview of specific words as topic clusters instead of analysing the richness of the content of the conversations by understanding the linkage between these words. To overcome this, we made a starting step in this technical paper, based on field trials, to prototype a tool that moves towards automatic collaboration analytics. We designed a technical setup to collect, process and visualize audio data automatically. The data collection took place while a board game was played among university staff with pre-assigned roles, to create awareness of the connection between learning analytics and learning design. We not only performed a word-level analysis of the conversations, but also analysed their richness by interactively visualizing the strength of the linkage between these words and phrases. In this visualization, we used a network graph to show turn-taking exchanges between different roles, alongside the word-level and phrase-level analysis. We also used centrality measures to understand the network graph further, based on how much hold certain words have over the network and how influential they are.
Finally, we found that this approach had certain limitations in terms of automation in speaker diarization (i.e., who spoke when) and text data pre-processing. Therefore, we concluded that even though the technical setup was only partially automated, it is a way forward to understand the richness of the conversations between different roles and a significant step towards automatic collaboration analytics.
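The turn-taking network and centrality analysis described in the abstract can be sketched in pure Python. This is a minimal illustration, not the authors' implementation: the role names, utterances, and the choice of degree centrality are invented for the example; the paper's tool works on diarized audio transcripts and an interactive graph visualization.

```python
from collections import Counter, defaultdict

# Hypothetical diarized transcript: (speaker_role, utterance) pairs.
# Roles and utterances are illustrative, not from the study's data.
turns = [
    ("teacher", "we need a dashboard"),
    ("researcher", "the dashboard should show speaking time"),
    ("manager", "speaking time per role would help"),
    ("teacher", "agreed, per role"),
    ("researcher", "let us log every turn"),
]

# Edge (a, b) with weight w means role b spoke directly after role a
# w times -- the turn-taking exchanges the abstract visualizes as a graph.
edges = Counter((turns[i][0], turns[i + 1][0]) for i in range(len(turns) - 1))

# Degree centrality: the fraction of other roles a role exchanges
# turns with, a simple measure of "hold over the network".
neighbours = defaultdict(set)
for a, b in edges:
    neighbours[a].add(b)
    neighbours[b].add(a)
n = len(neighbours)
centrality = {role: len(nbrs) / (n - 1) for role, nbrs in neighbours.items()}
```

In this toy transcript every role exchanges turns with both others, so each role's degree centrality is 1.0; on real data the weights in `edges` and the centrality scores would differentiate dominant from peripheral roles.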

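The word-level and phrase-level analysis the paper pairs with the network graph can likewise be sketched with stdlib counters. Again a hedged example: the utterances and stopword list are invented, and the paper links words and phrases interactively rather than just counting them.

```python
import re
from collections import Counter

# Toy utterances standing in for the transcribed conversation text;
# the study's actual transcripts are not reproduced here.
utterances = [
    "learning analytics supports learning design",
    "learning design informs the dashboard",
    "the dashboard visualises learning analytics",
]

STOPWORDS = {"the", "a", "an", "of", "and"}

def tokens(text):
    """Lowercase word tokens with stopwords removed."""
    return [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]

# Word-level analysis: frequency of each content word.
words = Counter(w for u in utterances for w in tokens(u))

# Phrase-level analysis: bigram counts linking adjacent words within an
# utterance -- the "linkage between words" the visualization draws on.
bigrams = Counter(
    (ts[i], ts[i + 1])
    for u in utterances
    for ts in [tokens(u)]
    for i in range(len(ts) - 1)
)
```

Repeated bigrams such as ("learning", "analytics") surface the recurring phrases that a plain word cloud would split into disconnected topic words.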

Figures (g001–g013):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bc0b/8124177/af11d860d4e6/sensors-21-03156-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bc0b/8124177/a5f8fc44c37e/sensors-21-03156-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bc0b/8124177/07a4c67de37a/sensors-21-03156-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bc0b/8124177/7fa1fa4f1050/sensors-21-03156-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bc0b/8124177/9e24280b1220/sensors-21-03156-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bc0b/8124177/794cbead78a3/sensors-21-03156-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bc0b/8124177/c1e474288489/sensors-21-03156-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bc0b/8124177/296e886a0828/sensors-21-03156-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bc0b/8124177/a26b72fd714f/sensors-21-03156-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bc0b/8124177/460ea582819f/sensors-21-03156-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bc0b/8124177/2ebb218881fa/sensors-21-03156-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bc0b/8124177/c975d74f5a2c/sensors-21-03156-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bc0b/8124177/3c358f254ba8/sensors-21-03156-g013.jpg

Similar Articles

1. Towards Automatic Collaboration Analytics for Group Speech Data Using Learning Analytics. Sensors (Basel). 2021 May 2;21(9):3156. doi: 10.3390/s21093156.
2. Multimodal Speaker Diarization Using a Pre-Trained Audio-Visual Synchronization Model. Sensors (Basel). 2019 Nov 25;19(23):5163. doi: 10.3390/s19235163.
3. Child-adult speech diarization in naturalistic conditions of preschool classrooms using room-independent ResNet model and automatic speech recognition-based re-segmentation. J Acoust Soc Am. 2024 Feb 1;155(2):1198-1215. doi: 10.1121/10.0024353.
4. Envisioning Insight-Driven Learning Based on Thick Data Analytics With Focus on Healthcare. IEEE Access. 2020 Jun 1;8:114998-115004. doi: 10.1109/ACCESS.2020.2995763. eCollection 2020.
5. Towards an understanding of speech and song perception. Logoped Phoniatr Vocol. 2005;30(3-4):129-35. doi: 10.1080/14015430500262160.
6. Perception of co-speech gestures in aphasic patients: a visual exploration study during the observation of dyadic conversations. Cortex. 2015 Mar;64:157-68. doi: 10.1016/j.cortex.2014.10.013. Epub 2014 Nov 4.
7. Low-frequency neural activity reflects rule-based chunking during speech listening. Elife. 2020 Apr 20;9:e55613. doi: 10.7554/eLife.55613.
8. Automatic speaker diarization for natural conversation analysis in autism clinical trials. Sci Rep. 2023 Jun 24;13(1):10270. doi: 10.1038/s41598-023-36701-4.
9. The effectiveness of internet-based e-learning on clinician behavior and patient outcomes: a systematic review protocol. JBI Database System Rev Implement Rep. 2015 Jan;13(1):52-64. doi: 10.11124/jbisrir-2015-1919.
10. Amplitude (vu and rms) and Temporal (msec) Measures of Two Northwestern University Auditory Test No. 6 Recordings. J Am Acad Audiol. 2015 Apr;26(4):346-54. doi: 10.3766/jaaa.26.4.3.

Cited By

1. CUSCO: An Unobtrusive Custom Secure Audio-Visual Recording System for Ambient Assisted Living. Sensors (Basel). 2024 Feb 26;24(5):1506. doi: 10.3390/s24051506.
2. From Sensor Data to Educational Insights. Sensors (Basel). 2022 Nov 7;22(21):8556. doi: 10.3390/s22218556.
3. On the Use of Large Interactive Displays to Support Collaborative Engagement and Visual Exploratory Tasks. Sensors (Basel). 2021 Dec 16;21(24):8403. doi: 10.3390/s21248403.