

Combining Cardiovascular and Pupil Features Using k-Nearest Neighbor Classifiers to Assess Task Demand, Social Context, and Sentence Accuracy During Listening.

Affiliations

Otolaryngology Head and Neck Surgery, Ear & Hearing, Amsterdam Public Health Research Institute, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam, the Netherlands.

Eriksholm Research Centre, Snekkersten, Denmark.

Publication Information

Trends Hear. 2024 Jan-Dec;28:23312165241232551. doi: 10.1177/23312165241232551.

DOI: 10.1177/23312165241232551
PMID: 38549351
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10981225/
Abstract

In daily life, both acoustic factors and social context can affect listening effort investment. In laboratory settings, information about listening effort has been deduced from pupil and cardiovascular responses independently. The extent to which these measures can jointly predict listening-related factors is unknown. Here we combined pupil and cardiovascular features to predict acoustic and contextual aspects of speech perception. Data were collected from 29 adults (mean  =  64.6 years, SD  =  9.2) with hearing loss. Participants performed a speech perception task at two individualized signal-to-noise ratios (corresponding to 50% and 80% of sentences correct) and in two social contexts (the presence and absence of two observers). Seven features were extracted per trial: baseline pupil size, peak pupil dilation, mean pupil dilation, interbeat interval, blood volume pulse amplitude, pre-ejection period and pulse arrival time. These features were used to train k-nearest neighbor classifiers to predict task demand, social context and sentence accuracy. The k-fold cross validation on the group-level data revealed above-chance classification accuracies: task demand, 64.4%; social context, 78.3%; and sentence accuracy, 55.1%. However, classification accuracies diminished when the classifiers were trained and tested on data from different participants. Individually trained classifiers (one per participant) performed better than group-level classifiers: 71.7% (SD  =  10.2) for task demand, 88.0% (SD  =  7.5) for social context, and 60.0% (SD  =  13.1) for sentence accuracy. We demonstrated that classifiers trained on group-level physiological data to predict aspects of speech perception generalized poorly to novel participants. Individually calibrated classifiers hold more promise for future applications.
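The analysis pipeline described in the abstract (seven physiological features per trial, k-nearest neighbor classifiers, k-fold cross-validation on pooled data, generalization to novel participants, and individually trained models) can be sketched as follows. This is a minimal illustration on simulated data, not the authors' code: the trial count per participant, the choice of k = 5, and the feature scaling step are assumptions introduced here for the example.

```python
# Hypothetical sketch of the paper's classification setup on simulated data.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score, StratifiedKFold, LeaveOneGroupOut
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Seven features per trial, as in the paper: baseline pupil size, peak pupil
# dilation, mean pupil dilation, interbeat interval, blood volume pulse
# amplitude, pre-ejection period, pulse arrival time.
N_FEATURES = 7
N_PARTICIPANTS = 29
TRIALS_PER_PARTICIPANT = 40  # assumed trial count, for illustration only

# Simulated data: X holds trial features, y a binary label (e.g. high vs. low
# task demand), and groups identifies which participant produced each trial.
X = rng.normal(size=(N_PARTICIPANTS * TRIALS_PER_PARTICIPANT, N_FEATURES))
y = rng.integers(0, 2, size=len(X))
groups = np.repeat(np.arange(N_PARTICIPANTS), TRIALS_PER_PARTICIPANT)

clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))

# Group-level model: k-fold cross-validation over the pooled trials, so
# every participant can contribute to both training and test folds.
group_acc = cross_val_score(clf, X, y, cv=StratifiedKFold(5)).mean()

# Generalization to novel participants: leave-one-participant-out, so the
# test participant never appears in the training data.
loso_acc = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut()).mean()

# Individually trained classifiers: one model per participant,
# cross-validated on that participant's trials only.
indiv_accs = []
for p in range(N_PARTICIPANTS):
    mask = groups == p
    indiv_accs.append(
        cross_val_score(clf, X[mask], y[mask], cv=StratifiedKFold(5)).mean()
    )

print(f"group-level accuracy: {group_acc:.3f}")
print(f"leave-one-participant-out accuracy: {loso_acc:.3f}")
print(f"individual mean accuracy: {np.mean(indiv_accs):.3f}")
```

On the random labels used here all three accuracies hover near chance (0.5); on real physiological data the paper reports the pooled and per-participant schemes outperforming the cross-participant one, which is exactly the contrast the three cross-validation strategies above make explicit.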


Figures (1-8):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c052/10981225/0afa9e75e21a/10.1177_23312165241232551-fig1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c052/10981225/dc2a65b86f6d/10.1177_23312165241232551-fig2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c052/10981225/b5318098a244/10.1177_23312165241232551-fig3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c052/10981225/248dfffec68d/10.1177_23312165241232551-fig4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c052/10981225/892ca2b7ecd8/10.1177_23312165241232551-fig5.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c052/10981225/b02020b510cf/10.1177_23312165241232551-fig6.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c052/10981225/4db383f424b5/10.1177_23312165241232551-fig7.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c052/10981225/7b53535f4126/10.1177_23312165241232551-fig8.jpg

Similar Articles

1. Combining Cardiovascular and Pupil Features Using k-Nearest Neighbor Classifiers to Assess Task Demand, Social Context, and Sentence Accuracy During Listening.
Trends Hear. 2024 Jan-Dec;28:23312165241232551. doi: 10.1177/23312165241232551.

2. Using Pupillometry in Virtual Reality as a Tool for Speech-in-Noise Research.
Ear Hear. 2025 Jul 2. doi: 10.1097/AUD.0000000000001692.

3. Listening Effort and Memory Effort in Cochlear Implant Users: A Pupillometry Study.
Ear Hear. 2025 Jul 30. doi: 10.1097/AUD.0000000000001698.

4. On the Feasibility of Using Behavioral Listening Effort Test Methods to Evaluate Auditory Performance in Cochlear Implant Users.
Trends Hear. 2024 Jan-Dec;28:23312165241240572. doi: 10.1177/23312165241240572.

5. Seeing a Talker's Mouth Reduces the Effort of Perceiving Speech and Repairing Perceptual Mistakes for Listeners With Cochlear Implants.
Ear Hear. 2025 Jun 16. doi: 10.1097/AUD.0000000000001683.

6. Advance Contextual Clues Alleviate Listening Effort During Sentence Repair in Listeners With Hearing Aids.
J Speech Lang Hear Res. 2025 Apr 8;68(4):2144-2156. doi: 10.1044/2025_JSLHR-24-00184. Epub 2025 Mar 28.

7. Comparison of Two Modern Survival Prediction Tools, SORG-MLA and METSSS, in Patients With Symptomatic Long-bone Metastases Who Underwent Local Treatment With Surgery Followed by Radiotherapy and With Radiotherapy Alone.
Clin Orthop Relat Res. 2024 Dec 1;482(12):2193-2208. doi: 10.1097/CORR.0000000000003185. Epub 2024 Jul 23.

8. Leveraging a foundation model zoo for cell similarity search in oncological microscopy across devices.
Front Oncol. 2025 Jun 18;15:1480384. doi: 10.3389/fonc.2025.1480384. eCollection 2025.

9. Attention Mobilization as a Modulator of Listening Effort: Evidence From Pupillometry.
Trends Hear. 2024 Jan-Dec;28:23312165241245240. doi: 10.1177/23312165241245240.

10. Speech mode classification from electrocorticography: transfer between electrodes and participants.
J Neural Eng. 2025 Jul 31;22(4). doi: 10.1088/1741-2552/adf2de.

References Cited in This Article

1. Photoplethysmography upon cold stress-impact of measurement site and acquisition mode.
Front Physiol. 2023 Jun 1;14:1127624. doi: 10.3389/fphys.2023.1127624. eCollection 2023.

2. Copresence Was Found to Be Related to Some Pupil Measures in Persons With Hearing Loss While They Performed a Speech-in-Noise Task.
Ear Hear. 2023;44(5):1190-1201. doi: 10.1097/AUD.0000000000001361. Epub 2023 Apr 4.

3. Individualized Modeling to Distinguish Between High and Low Arousal States Using Physiological Data.
J Healthc Inform Res. 2020 Jan 22;4(1):91-109. doi: 10.1007/s41666-019-00064-1. eCollection 2020 Mar.

4. The confounding effects of eye blinking on pupillometry, and their remedy.
PLoS One. 2021 Dec 17;16(12):e0261463. doi: 10.1371/journal.pone.0261463. eCollection 2021.

5. The Assessment of Autonomic Nervous System Activity Based on Photoplethysmography in Healthy Young Men.
Front Physiol. 2021 Sep 24;12:733264. doi: 10.3389/fphys.2021.733264. eCollection 2021.

6. Effortful listening: Sympathetic activity varies as a function of listening demand but parasympathetic activity does not.
Hear Res. 2021 Oct;410:108348. doi: 10.1016/j.heares.2021.108348. Epub 2021 Sep 4.

7. Social observation increases the cardiovascular response of hearing-impaired listeners during a speech reception task.
Hear Res. 2021 Oct;410:108334. doi: 10.1016/j.heares.2021.108334. Epub 2021 Aug 12.

8. Alzheimer's Disease and Frontotemporal Dementia: A Robust Classification Method of EEG Signals and a Comparison of Validation Methods.
Diagnostics (Basel). 2021 Aug 9;11(8):1437. doi: 10.3390/diagnostics11081437.

9. A Robust Machine Learning Based Framework for the Automated Detection of ADHD Using Pupillometric Biomarkers and Time Series Analysis.
Sci Rep. 2021 Aug 12;11(1):16370. doi: 10.1038/s41598-021-95673-5.

10. Understanding Speech Amid the Jingle and Jangle: Recommendations for Improving Measurement Practices in Listening Effort Research.
Audit Percept Cogn. 2020;3(4):169-188. doi: 10.1080/25742442.2021.1903293. Epub 2021 Mar 23.