

Explainable machine learning reveals the relationship between hearing thresholds and speech-in-noise recognition in listeners with normal audiograms.

Affiliations

Department of Speech, Language and Hearing Sciences, The University of Texas at Austin, Austin, Texas 78712, USA.

School of Mathematical and Statistical Sciences, The University of Texas Rio Grande Valley, Edinburg, Texas 78539, USA.

Publication Information

J Acoust Soc Am. 2023 Oct 1;154(4):2278-2288. doi: 10.1121/10.0021303.

DOI: 10.1121/10.0021303
PMID: 37823779
Abstract

Some individuals complain of listening-in-noise difficulty despite having a normal audiogram. In this study, machine learning is applied to examine the extent to which hearing thresholds can predict speech-in-noise recognition among normal-hearing individuals. The specific goals were to (1) compare the performance of one standard model (GAM, generalized additive model) and four machine learning models (ANN, artificial neural network; DNN, deep neural network; RF, random forest; XGBoost, eXtreme gradient boosting), and (2) examine the relative contribution of individual audiometric frequencies and demographic variables in predicting speech-in-noise recognition. Archival data included thresholds (0.25-16 kHz) and speech recognition thresholds (SRTs) from listeners with clinically normal audiograms (n = 764 participants or 1528 ears; age, 4-38 years old). Among the machine learning models, XGBoost performed significantly better than the other methods (mean absolute error; MAE = 1.62 dB). ANN and RF yielded similar performances (MAE = 1.68 and 1.67 dB, respectively), whereas, surprisingly, DNN showed relatively poorer performance (MAE = 1.94 dB). The MAE for GAM was 1.61 dB. SHapley Additive exPlanations revealed that age and the thresholds at 16 kHz, 12.5 kHz, etc., in that order of importance, contributed to SRT. These results suggest the importance of hearing in the extended high frequencies for predicting speech-in-noise recognition in listeners with normal audiograms.
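The pipeline the abstract describes (predict per-ear SRT from audiometric thresholds plus age, score models by MAE on held-out data, then rank feature importance) can be sketched end to end. The sketch below is a minimal, stdlib-only illustration under loud assumptions: the data are synthetic (the generating coefficients and feature names are invented, not the study's archival data), ordinary least squares stands in for the GAM/XGBoost regressors, and permutation importance stands in for SHAP as a cheap model-agnostic attribution.

```python
import random

random.seed(0)

# Illustrative feature set: age plus per-ear thresholds (dB HL) at a few of
# the 0.25-16 kHz audiometric frequencies used in the study (names assumed).
FEATURES = ["age", "thr_16k", "thr_12_5k", "thr_4k", "thr_1k"]

def make_ear():
    """Synthetic ear: SRT driven mostly by age and extended-high-frequency
    thresholds, mirroring the reported importance ordering (coefficients assumed)."""
    age = random.uniform(4, 38)
    thr_16k = random.gauss(5 + 0.4 * age, 4.0)     # EHF thresholds drift with age
    thr_12_5k = random.gauss(3 + 0.3 * age, 4.0)
    thr_4k = random.gauss(5.0, 4.0)
    thr_1k = random.gauss(5.0, 4.0)
    srt = -6 + 0.08 * age + 0.05 * thr_16k + 0.03 * thr_12_5k + random.gauss(0, 1)
    return [age, thr_16k, thr_12_5k, thr_4k, thr_1k], srt

train = [make_ear() for _ in range(1200)]
test = [make_ear() for _ in range(300)]

def solve(A, b):
    """Gaussian elimination with partial pivoting (stdlib-only linear solve)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_ols(rows):
    """Ordinary least squares via the normal equations: a simple stand-in
    for the GAM/XGBoost regressors compared in the paper."""
    X = [[1.0] + feats for feats, _ in rows]
    y = [srt for _, srt in rows]
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    return solve(XtX, Xty)

def predict(beta, feats):
    return beta[0] + sum(b * f for b, f in zip(beta[1:], feats))

def mae(beta, rows):
    """Mean absolute error in dB, the evaluation metric used in the study."""
    return sum(abs(predict(beta, f) - srt) for f, srt in rows) / len(rows)

def permutation_importance(beta, rows, idx):
    """MAE increase after shuffling one feature column: a lightweight,
    model-agnostic stand-in for SHAP feature attributions."""
    base = mae(beta, rows)
    feats = [f[:] for f, _ in rows]
    col = [f[idx] for f in feats]
    random.shuffle(col)
    for f, v in zip(feats, col):
        f[idx] = v
    return mae(beta, [(f, srt) for f, (_, srt) in zip(feats, rows)]) - base

beta = fit_ols(train)
print(f"test MAE: {mae(beta, test):.2f} dB")
imp = {name: permutation_importance(beta, test, i) for i, name in enumerate(FEATURES)}
for name, delta in sorted(imp.items(), key=lambda kv: -kv[1]):
    print(f"  {name:9s} {delta:+.3f} dB MAE when shuffled")
```

On this synthetic set, age and the extended-high-frequency thresholds dominate the importance ranking by construction; the study reports the analogous ordering from SHAP values computed over the models fit to the real archival data.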


Similar Articles

1. Explainable machine learning reveals the relationship between hearing thresholds and speech-in-noise recognition in listeners with normal audiograms. J Acoust Soc Am. 2023 Oct 1;154(4):2278-2288. doi: 10.1121/10.0021303.
2. Extended High-frequency Hearing Impairment Despite a Normal Audiogram: Relation to Early Aging, Speech-in-noise Perception, Cochlear Function, and Routine Earphone Use. Ear Hear. 2022 May/Jun;43(3):822-835. doi: 10.1097/AUD.0000000000001140.
3. Objective Prediction of Hearing Aid Benefit Across Listener Groups Using Machine Learning: Speech Recognition Performance With Binaural Noise-Reduction Algorithms. Trends Hear. 2018 Jan-Dec;22:2331216518768954. doi: 10.1177/2331216518768954.
4. Speech-in-speech listening on the LiSN-S test by older adults with good audiograms depends on cognition and hearing acuity at high frequencies. Ear Hear. 2015 Jan;36(1):24-41. doi: 10.1097/AUD.0000000000000096.
5. Dynamically Masked Audiograms With Machine Learning Audiometry. Ear Hear. 2020 Nov/Dec;41(6):1692-1702. doi: 10.1097/AUD.0000000000000891.
6. The relationship between high-frequency pure-tone hearing loss, hearing in noise test (HINT) thresholds, and the articulation index. J Am Acad Audiol. 2012 Nov-Dec;23(10):779-88. doi: 10.3766/jaaa.23.10.4.
7. Auditory models of suprathreshold distortion and speech intelligibility in persons with impaired hearing. J Am Acad Audiol. 2013 Apr;24(4):307-28. doi: 10.3766/jaaa.24.4.6.
8. How much individualization is required to predict the individual effect of suprathreshold processing deficits? Assessing Plomp's distortion component with psychoacoustic detection thresholds and FADE. Hear Res. 2022 Dec;426:108609. doi: 10.1016/j.heares.2022.108609. Epub 2022 Sep 20.
9. Hearing Impairment in the Extended High Frequencies in Children Despite Clinically Normal Hearing. Ear Hear. 2022;43(6):1653-1660. doi: 10.1097/AUD.0000000000001225. Epub 2022 Apr 25.
10. A model of speech recognition for hearing-impaired listeners based on deep learning. J Acoust Soc Am. 2022 Mar;151(3):1417. doi: 10.1121/10.0009411.

Cited By

1. A systematic review of machine learning approaches in cochlear implant outcomes. NPJ Digit Med. 2025 Jul 5;8(1):411. doi: 10.1038/s41746-025-01733-9.
2. Artificial Intelligence in Audiology: A Scoping Review of Current Applications and Future Directions. Sensors (Basel). 2024 Nov 6;24(22):7126. doi: 10.3390/s24227126.
3. Predictors of Speech-in-Noise Understanding in a Population of Occupationally Noise-Exposed Individuals. Biology (Basel). 2024 Jun 5;13(6):416. doi: 10.3390/biology13060416.