Department of Computing, University of Turku, Turku, Finland.
Methods Inf Med. 2024 May;63(1-02):35-51. doi: 10.1055/a-2385-1355. Epub 2024 Aug 13.
Synthetic data have been proposed as a solution for sharing anonymized versions of sensitive biomedical datasets. Ideally, synthetic data should preserve the structure and statistical properties of the original data, while protecting the privacy of the individual subjects. Differential Privacy (DP) is currently considered the gold standard approach for balancing this trade-off.
The aim of this study is to investigate the trustworthiness of group differences discovered by independent-sample tests on DP-synthetic data. The evaluation is carried out in terms of the tests' Type I and Type II errors. The former quantifies the tests' validity, i.e., whether the probability of false discoveries indeed stays below the significance level, while the latter indicates the tests' power in making real discoveries.
We evaluate the Mann-Whitney U test, Student's t-test, chi-squared test, and median test on DP-synthetic data. The private synthetic datasets are generated from real-world data, including a prostate cancer dataset (n = 500) and a cardiovascular dataset (n = 70,000), as well as from bivariate and multivariate simulated data. Five different DP-synthetic data generation methods are evaluated, including two basic DP histogram release methods and the MWEM, Private-PGM, and DP GAN algorithms.
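All four tests named above are available in SciPy. The following minimal sketch, on placeholder data rather than the study's datasets, shows how each test would be applied to two groups drawn from a synthetic dataset:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two independent samples from the same distribution (no true group
# difference), standing in for synthetic data of two patient groups.
a = rng.normal(loc=0.0, scale=1.0, size=200)
b = rng.normal(loc=0.0, scale=1.0, size=200)

# Mann-Whitney U test (non-parametric rank-based test)
u_stat, u_p = stats.mannwhitneyu(a, b, alternative="two-sided")

# Student's t-test (parametric, assumes equal variances)
t_stat, t_p = stats.ttest_ind(a, b, equal_var=True)

# Median test (compares group medians via a 2x2 contingency table)
m_stat, m_p, grand_median, table = stats.median_test(a, b)

# Chi-squared test on a binarized version of the same data
counts = np.array([[np.sum(a > 0), np.sum(a <= 0)],
                   [np.sum(b > 0), np.sum(b <= 0)]])
chi2, chi_p, dof, expected = stats.chi2_contingency(counts)

print(u_p, t_p, m_p, chi_p)
```

Since the two groups come from identical distributions here, any p-value below the significance level would be a false discovery; repeating such a comparison many times is how a Type I error rate is estimated empirically.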
A large portion of the evaluation results exhibited dramatically inflated Type I errors, especially at privacy levels of ε ≤ 1. This result calls for caution when releasing and analyzing DP-synthetic data: low p-values may be obtained in statistical tests simply as a byproduct of the noise added to protect privacy. A DP Smoothed Histogram-based synthetic data generation method was shown to produce valid Type I errors for all privacy levels tested, but it required a large original dataset size and a modest privacy budget (ε ≥ 5) in order to reach reasonable Type II error levels.
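The mechanism behind such noise-induced false discoveries can be illustrated with a basic DP histogram release. The sketch below (my own illustration, not the paper's implementation; the function name is hypothetical) perturbs histogram counts with the Laplace mechanism, samples synthetic data from the noisy histogram, and then estimates the Type I error of the Mann-Whitney U test when the two underlying groups are in fact identical:

```python
import numpy as np
from scipy import stats

def dp_histogram_synthetic(data, bins, epsilon, n_out, rng):
    """Release an epsilon-DP histogram via the Laplace mechanism and
    sample a synthetic dataset from it (illustrative sketch only)."""
    counts, edges = np.histogram(data, bins=bins)
    # Adding/removing one record changes one bin count by 1, so the
    # sensitivity is 1 and Laplace noise of scale 1/epsilon yields epsilon-DP.
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)
    noisy = np.clip(noisy, 0, None)          # negative counts are invalid
    probs = noisy / noisy.sum()
    # Sample bin indices, then uniform values within each chosen bin.
    idx = rng.choice(len(probs), size=n_out, p=probs)
    return rng.uniform(edges[idx], edges[idx + 1])

rng = np.random.default_rng(1)
reps, alpha, rejections = 200, 0.05, 0
for _ in range(reps):
    # Both groups come from the SAME distribution; any rejection is a
    # false discovery caused by sampling variation plus DP noise.
    a = dp_histogram_synthetic(rng.normal(size=500), 20, 0.1, 500, rng)
    b = dp_histogram_synthetic(rng.normal(size=500), 20, 0.1, 500, rng)
    if stats.mannwhitneyu(a, b, alternative="two-sided").pvalue < alpha:
        rejections += 1

type_i_rate = rejections / reps
print(type_i_rate)
```

With a strict budget such as ε = 0.1, the independent noise added to each group's histogram can shift the two synthetic distributions apart, pushing the empirical rejection rate above the nominal α, which is the inflation the abstract warns about.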