Trauma Research Center, Nursing Faculty, Baqiyatallah University of Medical Sciences, Tehran, Iran.
Data Analytics, Scientific Information Database (SID), Tehran, Iran.
J Acoust Soc Am. 2021 Sep;150(3):1945. doi: 10.1121/10.0006104.
This study aimed to develop an artificial intelligence (AI)-based tool for screening COVID-19 patients based on the acoustic parameters of their voices. Twenty-five acoustic parameters were extracted from voice samples of 203 COVID-19 patients and 171 healthy individuals, each of whom produced a sustained vowel /a/ for as long as possible after a deep breath. The selected parameters came from several categories, including fundamental frequency and its perturbation, harmonicity, vocal tract function, airflow sufficiency, and periodicity. After feature extraction, different machine learning methods were tested. A leave-one-subject-out validation scheme was used to tune the hyper-parameters and to record the test-set results, and the models were then compared on accuracy, precision, recall, and F1-score. Based on accuracy (89.71%), recall (91.63%), and F1-score (90.62%), the best model was the feedforward neural network (FFNN); its precision (89.63%) was slightly lower than that of logistic regression (90.17%). Given these results and the confusion matrices, the FFNN model was employed in the screening software. This tool could be used at home and in public places to check the health of an individual's respiratory system. If related abnormalities are detected in the test taker's voice, the tool recommends seeking medical consultation.
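The abstract does not specify the FFNN architecture, hyper-parameters, or feature-extraction code, so the following is only a minimal sketch of the evaluation protocol it describes: leave-one-subject-out validation of a feedforward network against logistic regression, scored by accuracy, precision, recall, and F1. The feature matrix is a synthetic placeholder standing in for the 25 acoustic parameters, the network size and subject grouping are illustrative assumptions, and scikit-learn is used here as a convenient stand-in for whatever toolkit the authors employed.

```python
# Hedged sketch of a leave-one-subject-out comparison of candidate classifiers
# on a pre-extracted acoustic feature matrix. All names and sizes below are
# illustrative assumptions, not the authors' exact configuration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.model_selection import LeaveOneGroupOut, cross_val_predict
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_subjects = 374                        # 203 patients + 171 healthy speakers
X = rng.normal(size=(n_subjects, 25))   # placeholder for the 25 acoustic parameters
y = np.array([1] * 203 + [0] * 171)     # 1 = COVID-19, 0 = healthy
groups = np.arange(n_subjects)          # one voice sample per subject assumed

models = {
    "FFNN": make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
    ),
    "Logistic regression": make_pipeline(
        StandardScaler(),
        LogisticRegression(max_iter=2000),
    ),
}

loso = LeaveOneGroupOut()
for name, model in models.items():
    # Each fold holds out all samples from one subject; predictions are pooled
    # across folds before the four metrics are computed.
    y_pred = cross_val_predict(model, X, y, cv=loso, groups=groups)
    print(f"{name}: acc={accuracy_score(y, y_pred):.3f} "
          f"prec={precision_score(y, y_pred):.3f} "
          f"rec={recall_score(y, y_pred):.3f} "
          f"f1={f1_score(y, y_pred):.3f}")
```

With real features in place of the random matrix, the printed metrics correspond directly to the accuracy, precision, recall, and F1 values reported in the abstract; on the placeholder data they will hover near chance.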