Matthew Fredrikson, Eric Lantz, Somesh Jha, Simon Lin, David Page, Thomas Ristenpart
University of Wisconsin.
Marshfield Clinic Research Foundation.
Proc USENIX Secur Symp. 2014 Aug;2014:17-32.
We initiate the study of privacy in pharmacogenetics, wherein machine learning models are used to guide medical treatments based on a patient's genotype and background. Performing an in-depth case study on privacy in personalized warfarin dosing, we show that suggested models carry privacy risks, in particular because attackers can perform what we call model inversion: an attacker, given the model and some demographic information about a patient, can predict the patient's genetic markers. As differential privacy (DP) is an oft-proposed solution for medical settings such as this, we evaluate its effectiveness for building private versions of pharmacogenetic models. We show that DP mechanisms prevent our model inversion attacks when the privacy budget is carefully selected. We go on to analyze the impact on utility by performing simulated clinical trials with DP dosing models. We find that for privacy budgets effective at preventing attacks, patients would be exposed to increased risk of stroke, bleeding events, and mortality. We conclude that current DP mechanisms do not simultaneously improve genomic privacy while retaining desirable clinical efficacy, highlighting the need for new mechanisms that should be evaluated using the general methodology introduced by our work.
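To make the model-inversion idea concrete, here is a minimal illustrative sketch, not the paper's code: all names, the toy linear dosing model, the genotype values, and the priors are assumptions. The attacker knows the model, a patient's demographics, and the model's output (the dose), enumerates candidate genotypes, and picks the one whose predicted dose best matches the observation, weighted by a population prior (a MAP-style guess).

```python
import math

def gaussian_likelihood(observed, predicted, sigma=1.0):
    """Likelihood of the observed dose given a predicted dose (Gaussian error model)."""
    return math.exp(-((observed - predicted) ** 2) / (2 * sigma ** 2))

def invert_genotype(model, demographics, observed_dose, genotype_values, prior):
    """Return the candidate genotype maximizing prior-weighted agreement with the output."""
    best, best_score = None, float("-inf")
    for g in genotype_values:
        score = prior[g] * gaussian_likelihood(observed_dose, model(demographics, g))
        if score > best_score:
            best, best_score = g, score
    return best

# Hypothetical toy dosing model: dose depends on age and a genotype-specific offset.
coeff = {"AA": 0.0, "AG": 1.5, "GG": 3.0}
model = lambda demo, g: 5.0 + 0.1 * demo["age"] - coeff[g]
prior = {"AA": 0.5, "AG": 0.3, "GG": 0.2}  # assumed population frequencies

guess = invert_genotype(model, {"age": 50}, observed_dose=8.5,
                        genotype_values=coeff, prior=prior)
# Predictions are AA -> 10.0, AG -> 8.5, GG -> 7.0, so the attacker infers "AG".
```

The point of the sketch is that nothing here requires access to training data: the released model plus auxiliary demographic information is enough to narrow down a sensitive attribute.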
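The privacy/utility trade-off the abstract describes comes from the noise that DP adds. As an illustration only (the paper evaluates its own DP training mechanisms, not this primitive), the canonical Laplace mechanism releases a value with noise scaled by sensitivity/ε, so a smaller privacy budget ε means more noise and less clinical utility:

```python
import random
import statistics

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value with Laplace(sensitivity/epsilon) noise, giving epsilon-DP.

    The noise is sampled as the difference of two exponentials, which is
    Laplace-distributed with the given scale. Smaller epsilon => larger scale
    => noisier (more private, less useful) output.
    """
    scale = sensitivity / epsilon
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_value + noise

random.seed(0)
# Hypothetical query with true answer 10.0 and sensitivity 1, at budget epsilon = 0.5.
releases = [laplace_mechanism(10.0, sensitivity=1.0, epsilon=0.5)
            for _ in range(100_000)]
# Unbiased on average, but each individual release carries stdev ~ 2.8 of noise.
```

At budgets small enough to defeat inversion attacks, this per-release noise is exactly what degrades dosing accuracy in the simulated clinical trials.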