Seyedi Salman, Griner Emily, Corbin Lisette, Jiang Zifan, Roberts Kailey, Iacobelli Luca, Milloy Aaron, Boazak Mina, Bahrami Rad Ali, Abbasi Ahmed, Cotes Robert O, Clifford Gari D
Department of Biomedical Informatics, Emory University, Atlanta, GA, United States.
Department of Psychiatry and Behavioral Sciences, Emory University, Atlanta, GA, United States.
JMIR Ment Health. 2023 Oct 31;10:e48517. doi: 10.2196/48517.
BACKGROUND: Automatic speech recognition (ASR) technology is increasingly being used for transcription in clinical contexts. Although there are numerous transcription services using ASR, few studies have compared the word error rate (WER) between different transcription services among different diagnostic groups in a mental health setting. There has also been little research into the types of words ASR transcriptions mistakenly generate or omit. OBJECTIVE: This study compared the WER of 3 ASR transcription services (Amazon Transcribe [Amazon.com, Inc], Zoom-Otter AI [Zoom Video Communications, Inc], and Whisper [OpenAI Inc]) in interviews across 2 different clinical categories (controls and participants experiencing a variety of mental health conditions). These ASR transcription services were also compared with a commercial human transcription service, Rev (Rev.Com, Inc). Words that were either included or excluded by the error in the transcripts were systematically analyzed by their Linguistic Inquiry and Word Count categories. METHODS: Participants completed a 1-time research psychiatric interview, which was recorded on a secure server. Transcriptions created by the research team were used as the gold standard from which WER was calculated. The interviewees were categorized into either the control group (n=18) or the mental health condition group (n=47) using the Mini-International Neuropsychiatric Interview. The total sample included 65 participants. Brunner-Munzel tests were used for comparing independent sets, such as the diagnostic groupings, and Wilcoxon signed rank tests were used for correlated samples when comparing the total sample between different transcription services. RESULTS: There were significant differences between each ASR transcription service's WER (P<.001). Amazon Transcribe's output exhibited significantly lower WERs than those of Zoom-Otter AI and Whisper.
ASR performances did not significantly differ across the 2 different clinical categories within each service (P>.05). A comparison between the human transcription service output from Rev and the best-performing ASR (Amazon Transcribe) demonstrated a significant difference (P<.001), with Rev having a slightly lower median WER (7.6%, IQR 5.4%-11.35% vs 8.9%, IQR 6.9%-11.6%). Heat maps and spider plots were used to visualize the most common errors in Linguistic Inquiry and Word Count categories, which were found to be within 3 overarching categories: Conversation, Cognition, and Function. CONCLUSIONS: Overall, consistent with previous literature, our results suggest that the WER between manual and automated transcription services may be narrowing as ASR services advance. These advances, coupled with decreased cost and time in receiving transcriptions, may make ASR transcriptions a more viable option within health care settings. However, more research is required to determine if errors in specific types of words impact the analysis and usability of these transcriptions, particularly for specific applications and in a variety of populations in terms of clinical diagnosis, literacy level, accent, and cultural origin.
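The WER reported throughout the abstract is conventionally computed as the word-level edit distance (substitutions, insertions, and deletions) between a hypothesis transcript and the gold-standard reference, divided by the number of reference words. The sketch below illustrates that standard definition under a simplifying assumption of whitespace tokenization; it is not the authors' exact pipeline, which may additionally normalize case and punctuation before scoring.

```python
# Minimal WER sketch: word-level Levenshtein distance divided by the
# reference length. Illustrative only; assumes whitespace tokenization
# with no case or punctuation normalization.
def wer(reference: str, hypothesis: str) -> float:
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # match/substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("word" for "world") and one deletion ("are")
# against a 5-word reference gives WER = 2/5.
print(wer("hello world how are you", "hello word how you"))  # → 0.4
```

A reported median WER of 8.9% therefore means that, for the typical interview, roughly 9 word-level errors occurred per 100 reference words.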