
Privacy-preserving Speech-based Depression Diagnosis via Federated Learning.

Publication Information

Annu Int Conf IEEE Eng Med Biol Soc. 2022 Jul;2022:1371-1374. doi: 10.1109/EMBC48229.2022.9871861.

Abstract

Mental health disorders, such as depression, affect a large and growing number of people worldwide, and they may cause severe emotional, behavioral, and physical health problems if left untreated. As depression affects a patient's speech characteristics, recent studies have proposed to leverage deep-learning-powered speech analysis models for depression diagnosis, which often require centralized learning on the collected voice data. However, this centralized training, which requires the data to be stored at a server, raises the risk of severe voice data breaches, and people may not be willing to share their speech data with third parties due to privacy concerns. To address these issues, in this paper we demonstrate for the first time that speech-based depression diagnosis models can be trained in a privacy-preserving way using federated learning (FL), which enables collaborative model training while keeping the private speech data decentralized on clients' devices. To ensure the model's robustness under attacks, we also integrate different FL defenses into the system, such as norm bounding, differential privacy, and secure aggregation mechanisms. Extensive experiments under various FL settings on the DAIC-WOZ dataset show that our FL model can achieve high performance without sacrificing much utility compared with centralized-learning approaches, while ensuring users' speech data privacy.

Clinical Relevance: The experiments were conducted on publicly available clinical datasets. No humans or animals were involved.
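The defenses named in the abstract (norm bounding and differential privacy applied to client updates before server-side averaging) follow a common federated-learning pattern. The sketch below illustrates that general pattern only, not the authors' implementation: the function names, clipping threshold, and noise scale are assumptions chosen for readability, and secure aggregation is omitted.

import numpy as np

def clip_update(update, max_norm):
    # Norm bounding: rescale a client's update so its L2 norm is at most max_norm.
    norm = np.linalg.norm(update)
    return update * min(1.0, max_norm / (norm + 1e-12))

def federated_round(global_weights, client_updates, max_norm=1.0, noise_std=0.01):
    # One FL round: clip each client update, average them, then add
    # Gaussian noise calibrated to the clipping bound (DP-style perturbation).
    clipped = [clip_update(u, max_norm) for u in client_updates]
    averaged = np.mean(clipped, axis=0)
    noisy = averaged + np.random.normal(0.0, noise_std * max_norm, size=averaged.shape)
    return global_weights + noisy

# Toy usage: a 4-parameter model and three simulated clients.
weights = np.zeros(4)
updates = [np.random.randn(4) for _ in range(3)]
weights = federated_round(weights, updates)
print(weights)

In a real deployment the clients would train the speech model locally and send only these (clipped, noised) updates, so raw voice recordings never leave the device.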

