

Cochlear implant datalogging accurately characterizes children's 'auditory scenes'.

Author information

Ganek Hillary, Forde-Dixon Deja, Cushing Sharon L, Papsin Blake C, Gordon Karen A

Affiliations

Archie's Cochlear Implant Laboratory, Hospital for Sick Children, Toronto, Ontario, Canada.

Institute of Medical Science, University of Toronto, Toronto, Ontario, Canada.

Publication information

Cochlear Implants Int. 2021 Mar;22(2):85-95. doi: 10.1080/14670100.2020.1826137. Epub 2020 Oct 2.

Abstract

This study sought to determine if children's auditory environments are accurately captured by the automatic scene classification embedded in cochlear implant (CI) processors and to quantify the amount of electronic device use in these environments. Seven children with CIs, 36.71 (SD = 11.94) months old, participated in this study. Three of the children were male and four were female. Eleven datalogs, containing outcomes from Cochlear's™ Nucleus 6 (Cochlear Corporation, Australia) CI scene classification algorithm, and seven day-long audio recordings collected with a Language ENvironment Analysis (LENA; LENA Research Foundation, USA) recorder were obtained for analysis. Results from the scene classification algorithm were strongly correlated with categories determined through human coding (r = .86, 95% CI = [-0.2, 1], F(5, 5.1) = 5.9, p = 0.04), but some differences emerged. Scene classification identified more 'Quiet' (t(8.2) = 4.1, p = 0.003) than human coders, while humans identified more 'Speech' (t(10.6) = -2.4, p = 0.04). On average, 8% (SD = 5.8) of the children's day was spent in electronic sound, which was primarily produced by mobile devices (39.7%). Conclusion: While CI scene classification software reflects children's natural auditory environments, it is important to consider how different scenes are defined when interpreting results. An electronic sounds category should be considered given how often children are exposed to such sounds.
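The group comparisons above report fractional degrees of freedom (e.g., 8.2, 10.6), which is characteristic of Welch's unequal-variance t-test with the Welch-Satterthwaite correction. The sketch below illustrates that style of comparison on invented data; the proportions are hypothetical and not taken from the study, and the original authors' exact analysis pipeline is not described in the abstract.

```python
# Hypothetical sketch: comparing datalog-classified vs. human-coded
# proportions of the day spent in a scene category ('Quiet') with
# Welch's t-test. All numbers are invented for illustration.
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and Welch-Satterthwaite (fractional) df."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)          # sample variances (n - 1)
    se2 = va / na + vb / nb                    # squared standard error
    t = (mean(a) - mean(b)) / se2 ** 0.5
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Invented per-recording proportions of the day classified as 'Quiet'
datalog_quiet = [0.45, 0.52, 0.48, 0.55, 0.50, 0.47, 0.53]
human_quiet = [0.30, 0.35, 0.28, 0.33, 0.31, 0.29, 0.34]

t, df = welch_t(datalog_quiet, human_quiet)
print(f"t({df:.1f}) = {t:.1f}")
```

Because the two samples can have unequal variances, the resulting degrees of freedom are generally non-integer, matching the reporting style in the abstract.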

