Kulangareth Nikhil Valsan, Kaufman Jaycee, Oreskovic Jessica, Fossat Yan
Klick Labs, Toronto, ON, Canada.
JMIR Biomed Eng. 2024 Mar 21;9:e56245. doi: 10.2196/56245.
The digital era has witnessed an escalating dependence on digital platforms for news and information, coupled with the advent of "deepfake" technology. Deepfakes, leveraging deep learning models on extensive data sets of voice recordings and images, pose substantial threats to media authenticity, potentially leading to unethical misuse such as impersonation and the dissemination of false information.
To counter this challenge, this study introduces the use of innate biological processes to distinguish authentic human voices from cloned voices. We propose that the presence or absence of certain perceptual features, such as pauses in speech, can effectively distinguish cloned from authentic audio.
A total of 49 adult participants representing diverse ethnic backgrounds and accents were recruited. Each participant contributed voice samples to train up to 3 distinct voice-cloning text-to-speech models and recorded 3 control paragraphs. The cloning models then generated synthetic versions of the control paragraphs, yielding a data set of up to 9 cloned audio samples and 3 control samples per participant. We analyzed speech pauses caused by biological actions such as respiration, swallowing, and cognitive processing. Five audio features describing each recording's speech pause profile were calculated. Differences between authentic and cloned audio were assessed for these features, and 5 classical machine learning algorithms were trained on them to create a prediction model. The generalization capability of the optimal model was evaluated on unseen data incorporating a model-naive generator, a model-naive paragraph, and model-naive participants.
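The abstract does not specify how the pause profile was computed. The sketch below shows one plausible way to derive the 5 pause-related features named in the results (time between pauses, variation in speech segment length, proportion of time speaking, and micro- and macropause rates) from a mono waveform, using a short-time energy threshold to separate speech from silence. The function name, the -40 dB silence floor, the frame sizes, and the 0.5-second micropause/macropause cutoff are illustrative assumptions, not values from the study.

```python
import numpy as np

def pause_profile_features(y, sr, silence_db=-40.0, frame_ms=25, hop_ms=10,
                           micro_max_s=0.5, macro_min_s=0.5):
    """Illustrative pause-profile features; all thresholds are assumptions."""
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)

    # Short-time RMS energy in dB relative to the loudest frame
    frames = np.lib.stride_tricks.sliding_window_view(y, frame)[::hop]
    rms = np.sqrt(np.mean(frames ** 2, axis=1) + 1e-12)
    db = 20 * np.log10(rms / (rms.max() + 1e-12))
    voiced = db > silence_db  # True = speech frame, False = pause frame

    # Collapse the frame mask into contiguous speech/pause segments (seconds)
    changes = np.flatnonzero(np.diff(voiced.astype(int))) + 1
    bounds = np.concatenate(([0], changes, [len(voiced)]))
    segments = [(bool(voiced[s]), (e - s) * hop / sr)
                for s, e in zip(bounds[:-1], bounds[1:])]
    speech = [d for is_speech, d in segments if is_speech]
    pauses = [d for is_speech, d in segments if not is_speech]

    total_s = len(voiced) * hop / sr
    return {
        "mean_time_between_pauses": float(np.mean(speech)) if speech else 0.0,
        "sd_speech_segment_length": float(np.std(speech)) if speech else 0.0,
        "proportion_time_speaking": sum(speech) / total_s,
        "micropause_rate_per_min": 60 * sum(d < micro_max_s for d in pauses) / total_s,
        "macropause_rate_per_min": 60 * sum(d >= macro_min_s for d in pauses) / total_s,
    }
```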
Cloned audio exhibited significantly increased time between pauses (P<.001), decreased variation in speech segment length (P=.003), an increased overall proportion of time spent speaking (P=.04), and decreased rates of micro- and macropauses in speech (both P=.01). Of the 5 machine learning models implemented using these features, the AdaBoost model achieved the highest performance, with a 5-fold cross-validation balanced accuracy of 0.81 (SD 0.05). The other models were support vector machine (balanced accuracy 0.79, SD 0.03), random forest (balanced accuracy 0.78, SD 0.04), logistic regression (balanced accuracy 0.76, SD 0.10), and decision tree (balanced accuracy 0.72, SD 0.06). On unseen test data, the optimal AdaBoost model achieved an overall accuracy of 0.79.
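As an illustration of the evaluation protocol described here (not the authors' exact pipeline), a minimal scikit-learn sketch of 5-fold cross-validated balanced accuracy for an AdaBoost classifier follows. The function name, hyperparameters, fold strategy, and random seed are assumptions.

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

def evaluate_adaboost(X, y, seed=0):
    """Mean and SD of 5-fold cross-validated balanced accuracy.

    X: n_samples x 5 matrix of pause-profile features;
    y: labels (1 = cloned, 0 = authentic). Placeholder inputs, not study data.
    """
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    clf = AdaBoostClassifier(random_state=seed)  # default hyperparameters (assumption)
    scores = cross_val_score(clf, X, y, cv=cv, scoring="balanced_accuracy")
    return float(scores.mean()), float(scores.std())
```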
The incorporation of perceptual, biological features into machine learning models demonstrates promising results in distinguishing between authentic human voices and cloned audio.