Saxon Michael, Tripathi Ayush, Jiao Yishan, Liss Julie, Berisha Visar
Arizona State Univ., Sch. of Elect., Comput., & Energy Eng., Tempe, Arizona, USA.
IEEE/ACM Trans Audio Speech Lang Process. 2020;28:2511-2522. doi: 10.1109/taslp.2020.3015035. Epub 2020 Aug 7.
Hypernasality is a common characteristic symptom across many motor-speech disorders. For voiced sounds, hypernasality introduces an additional resonance in the lower frequencies and, for unvoiced sounds, there is reduced articulatory precision due to air escaping through the nasal cavity. However, the acoustic manifestation of these symptoms is highly variable, making hypernasality estimation very challenging, both for human specialists and automated systems. Previous work in this area relies on either engineered features based on statistical signal processing or machine learning models trained on clinical ratings. Engineered features often fail to capture the complex acoustic patterns associated with hypernasality, whereas metrics based on machine learning are prone to overfitting to the small disease-specific speech datasets on which they are trained. Here we propose a new set of acoustic features that capture these complementary dimensions. The features are based on two acoustic models trained on a large corpus of healthy speech. The first acoustic model aims to measure nasal resonance from voiced sounds, whereas the second acoustic model aims to measure articulatory imprecision from unvoiced sounds. To demonstrate that the features derived from these acoustic models are specific to hypernasal speech, we evaluate them across different dysarthria corpora. Our results show that the features generalize even when training on hypernasal speech from one disease and evaluating on hypernasal speech from another disease (e.g., training on Parkinson's disease, evaluation on Huntington's disease), and when training on neurologically disordered speech but evaluating on cleft palate speech.
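The abstract's features hinge on treating voiced and unvoiced sounds separately (nasal resonance is measured from voiced segments, articulatory imprecision from unvoiced ones). The paper's acoustic models are not described here, but the voiced/unvoiced front-end split they presuppose can be illustrated with a classic short-time energy and zero-crossing-rate heuristic. This is a minimal sketch, not the authors' method; every function name, frame size, and threshold below is a hypothetical choice for illustration.

```python
import math

def frame_features(samples, frame_len=400, hop=200):
    """Split a mono signal into frames and compute short-time energy and
    zero-crossing rate (ZCR) -- a textbook voiced/unvoiced front end."""
    feats = []
    for start in range(0, len(samples) - frame_len + 1, hop):
        frame = samples[start:start + frame_len]
        energy = sum(x * x for x in frame) / frame_len
        # Fraction of adjacent sample pairs that change sign.
        zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / (frame_len - 1)
        feats.append((energy, zcr))
    return feats

def is_voiced(energy, zcr, energy_thresh=0.01, zcr_thresh=0.25):
    # Voiced speech tends to have high energy and low ZCR;
    # unvoiced (fricative-like) speech has lower energy and high ZCR.
    # Thresholds here are arbitrary illustrative values.
    return energy > energy_thresh and zcr < zcr_thresh

# Synthetic check: a 100 Hz sine stands in for a voiced segment, and a
# low-amplitude alternating-sign signal stands in for unvoiced noise.
sr = 8000
voiced_sig = [math.sin(2 * math.pi * 100 * t / sr) for t in range(sr)]
unvoiced_sig = [0.1 * (-1) ** t for t in range(sr)]

v_flags = [is_voiced(e, z) for e, z in frame_features(voiced_sig)]
u_flags = [is_voiced(e, z) for e, z in frame_features(unvoiced_sig)]
print(all(v_flags), any(u_flags))  # prints "True False"
```

In a real pipeline this split would be done with a proper voicing detector, after which voiced frames would feed the nasal-resonance model and unvoiced frames the articulatory-precision model described in the abstract.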