Francart Tom, Moonen Marc, Wouters Jan
ExpORL, Department of Neurosciences, Katholieke Universiteit Leuven, Leuven, Belgium.
Int J Audiol. 2009 Feb;48(2):80-90. doi: 10.1080/14992020802400662.
Speech reception tests are commonly administered by manually scoring the oral response of the subject. This requires a test supervisor to be continuously present. To avoid this, a subject can type the response, after which it can be scored automatically. However, spelling errors may then be counted as recognition errors, influencing the test results. We demonstrate an autocorrection approach based on two scoring algorithms to cope with spelling errors. The first algorithm deals with sentences and is based on word scores. The second algorithm deals with single words and is based on phoneme scores. Both algorithms were evaluated with a corpus of typed answers based on three different Dutch speech materials. The percentage of differences between automatic and manual scoring was determined, in addition to the mean difference in speech recognition threshold. The sentence correction algorithm achieved higher accuracy than is commonly obtained with these speech materials. The word correction algorithm performed better than the human operator. Both algorithms can be used in practice and allow speech reception tests with open-set speech materials over the internet.
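To make the idea of spelling-tolerant word scoring concrete, here is a minimal illustrative sketch (not the authors' published algorithm): a typed response is scored against a target sentence by matching each target word to a response word within a small Levenshtein edit-distance tolerance, so plain spelling slips are not counted as recognition errors. The function names and the tolerance value are illustrative assumptions.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def word_score(target: str, response: str, tolerance: int = 1) -> int:
    """Count target words matched by some response word within the
    edit-distance tolerance, so small spelling errors still score.
    The tolerance of 1 is an illustrative choice, not from the paper."""
    response_words = response.lower().split()
    score = 0
    for tw in target.lower().split():
        if any(levenshtein(tw, rw) <= tolerance for rw in response_words):
            score += 1
    return score
```

For example, `word_score("de hond loopt", "de hont lopt")` scores all three target words despite two misspellings, whereas strict string comparison would score only one. A production system for a specific speech material would need per-word tolerances and phoneme-level scoring for single-word tests, as the abstract describes.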