
Exploring the Relationship Between Privacy and Utility in Mobile Health: Algorithm Development and Validation via Simulations of Federated Learning, Differential Privacy, and External Attacks.

Affiliations

Department of Statistics, University of Michigan, Ann Arbor, MI, United States.

Department of Statistics and Data Science, Carnegie Mellon University, Pittsburgh, PA, United States.

Publication information

J Med Internet Res. 2023 Apr 20;25:e43664. doi: 10.2196/43664.

DOI: 10.2196/43664
PMID: 37079370
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10160928/
Abstract

BACKGROUND

Although evidence supporting the feasibility of large-scale mobile health (mHealth) systems continues to grow, privacy protection remains an important implementation challenge. The potential scale of publicly available mHealth applications and the sensitive nature of the data involved will inevitably attract unwanted attention from adversarial actors seeking to compromise user privacy. Although privacy-preserving technologies such as federated learning (FL) and differential privacy (DP) offer strong theoretical guarantees, it is not clear how such technologies actually perform under real-world conditions.
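To make the first of these technologies concrete: federated learning avoids centralized data pooling by having each client train locally and share only model weights, which a server then aggregates. The following is a minimal sketch of the standard FedAvg aggregation step; the function name, weights, and client sizes are illustrative and not drawn from the study.

```python
# Minimal sketch of federated averaging (FedAvg): the server combines
# locally trained client weights, weighted by each client's data size,
# without ever seeing the clients' raw data.

def fedavg(client_weights, client_sizes):
    """Weighted average of client model weights (lists of floats)."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n / total for w, n in zip(client_weights, client_sizes))
        for i in range(n_params)
    ]

# Three hypothetical clients with different amounts of local data;
# the client holding more data contributes proportionally more.
weights = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [10, 10, 20]
global_model = fedavg(weights, sizes)  # → [3.5, 4.5]
```

Note that the aggregated weights can still leak information about client data, which is why DP is often layered on top of FL.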

OBJECTIVE

Using data from the University of Michigan Intern Health Study (IHS), we assessed the privacy protection capabilities of FL and DP against the trade-offs in the associated model's accuracy and training time. Using a simulated external attack on a target mHealth system, we aimed to measure the effectiveness of such an attack under various levels of privacy protection on the target system and measure the costs to the target system's performance associated with the chosen levels of privacy protection.

METHODS

A neural network classifier that attempts to predict IHS participant daily mood ecological momentary assessment score from sensor data served as our target system. An external attacker attempted to identify participants whose average mood ecological momentary assessment score is lower than the global average. The attack followed techniques in the literature, given the relevant assumptions about the abilities of the attacker. For measuring attack effectiveness, we collected attack success metrics (area under the curve [AUC], positive predictive value, and sensitivity), and for measuring privacy costs, we calculated the target model training time and measured the model utility metrics. Both sets of metrics are reported under varying degrees of privacy protection on the target.
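The attack-success metrics named above can be computed from binary ground truth (whether a participant's average mood is truly below the global average) and the attacker's scores or predictions. The sketch below is illustrative rather than the study's actual code; it uses the standard definitions, with AUC computed as the probability that a randomly chosen positive outscores a randomly chosen negative.

```python
# Illustrative implementations of the three attack-success metrics.

def sensitivity(y_true, y_pred):
    """True positive rate: fraction of actual positives correctly flagged."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn)

def ppv(y_true, y_pred):
    """Positive predictive value: fraction of flagged cases that are real."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fp)

def auc(y_true, scores):
    """Probability a random positive outscores a random negative (ties = 0.5)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical attacker output over four participants
y_true = [1, 1, 0, 0]
scores = [0.9, 0.4, 0.6, 0.1]
y_pred = [1 if s >= 0.5 else 0 for s in scores]
```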

RESULTS

We found that FL alone does not provide adequate protection against the privacy attack proposed above: in the worst-case scenario, the attacker's AUC in determining which participants exhibit lower-than-average mood exceeds 0.90. However, under the highest level of DP tested in this study, the attacker's AUC fell to approximately 0.59, at the cost of only a 10 percentage point decrease in the target's R² and a 43% increase in model training time. Attack positive predictive value and sensitivity followed similar trends. Finally, we showed that the IHS participants most likely to require strong privacy protection are also most at risk from this particular privacy attack and therefore stand to benefit the most from these privacy-preserving technologies.
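The privacy-utility trade-off reported above arises from the DP mechanism itself: stronger privacy means more noise injected into training, which degrades model accuracy. A common way to apply DP in this setting is the per-step mechanism used by DP-SGD, which clips each gradient's L2 norm and then adds calibrated Gaussian noise. The sketch below illustrates that step only; parameter names are generic and the study's actual implementation may differ.

```python
import math
import random

def dp_clip_and_noise(grad, clip_norm, noise_multiplier, rng):
    """DP-SGD-style per-step mechanism: clip a gradient to L2 norm
    <= clip_norm, then add Gaussian noise with std proportional to
    noise_multiplier. Larger noise_multiplier -> stronger privacy
    guarantee but noisier (less useful) model updates."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [g * scale for g in grad]
    sigma = noise_multiplier * clip_norm
    return [g + rng.gauss(0.0, sigma) for g in clipped]

# A gradient of norm 5 is clipped to norm 1, then perturbed
rng = random.Random(0)
noisy = dp_clip_and_noise([3.0, 4.0], clip_norm=1.0, noise_multiplier=1.1, rng=rng)
```

Turning the `noise_multiplier` knob up is what drives the attacker's AUC toward chance level while simultaneously costing model accuracy and training time.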

CONCLUSIONS

Our results demonstrated both the necessity of proactive privacy protection research and the feasibility of the current FL and DP methods implemented in a real mHealth scenario. Our simulation methods characterized the privacy-utility trade-off in our mHealth setup using highly interpretable metrics, providing a framework for future research into privacy-preserving technologies in data-driven health and medical applications.

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7587/10160928/d1aab244464f/jmir_v25i1e43664_fig1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7587/10160928/e1ca6a676f8d/jmir_v25i1e43664_fig2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7587/10160928/0dc45b54ed15/jmir_v25i1e43664_fig3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7587/10160928/f54fcdf3d765/jmir_v25i1e43664_fig4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7587/10160928/11a432119d1c/jmir_v25i1e43664_fig5.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7587/10160928/b368c8770770/jmir_v25i1e43664_fig6.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7587/10160928/3fa362082fc8/jmir_v25i1e43664_fig7.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7587/10160928/5a34405a26bf/jmir_v25i1e43664_fig8.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7587/10160928/6db0797672f2/jmir_v25i1e43664_fig9.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7587/10160928/c52a83782602/jmir_v25i1e43664_fig10.jpg
