Cao Yang, Yoshikawa Masatoshi, Xiao Yonghui, Xiong Li
Department of Math and Computer Science, Emory University, Atlanta, GA 30322.
Department of Social Informatics, Kyoto University, Kyoto 606-8501, Japan.
IEEE Trans Knowl Data Eng. 2019 Jul;31(7):1281-1295. doi: 10.1109/TKDE.2018.2824328. Epub 2018 Apr 9.
Differential Privacy (DP) has received increasing attention as a rigorous privacy framework. Many existing studies employ traditional DP mechanisms (e.g., the Laplace mechanism) as primitives to continuously release private data, protecting privacy at each time point (i.e., event-level privacy); these studies assume that the data at different time points are independent, or that adversaries have no knowledge of the correlations between data. However, continuously generated data tend to be temporally correlated, and such correlations can be acquired by adversaries. In this paper, we investigate the potential privacy loss of a traditional DP mechanism under temporal correlations. First, we analyze the privacy leakage of a DP mechanism under temporal correlations that can be modeled by a Markov chain. Our analysis reveals that the event-level privacy loss of a DP mechanism may increase over time. We call this unexpected privacy loss temporal privacy leakage (TPL). Although TPL may increase over time, we find that its supremum may exist in some cases. Second, we design efficient algorithms for calculating TPL. Third, we propose data-releasing mechanisms that convert any existing DP mechanism into one that protects against TPL. Experiments confirm that our approach is efficient and effective.
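The Laplace mechanism named above as the event-level primitive, and the over-time accumulation of privacy loss that motivates TPL, can be sketched as follows. This is a minimal illustration under standard DP assumptions, not the paper's algorithms; the function names are ours:

```python
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with Laplace(0, sensitivity/epsilon) noise.

    One such release satisfies epsilon-differential privacy for a query
    with the given L1 sensitivity; applied independently at every time
    point, this yields event-level privacy.
    """
    rng = rng or random.Random()
    scale = sensitivity / epsilon
    # Laplace(0, b) noise = exponentially distributed magnitude
    # (rate 1/b) with a uniformly random sign.
    noise = rng.expovariate(1.0 / scale)
    return true_value + (noise if rng.random() < 0.5 else -noise)

def worst_case_cumulative_loss(eps_per_release, num_releases):
    """Sequential composition bound: T releases at eps each can leak up
    to T * eps in total -- the kind of growth over time that the paper
    quantifies (and, in some cases, bounds by a supremum) as TPL."""
    return eps_per_release * num_releases
```

The composition bound is the generic worst case; the paper's contribution is to quantify how much of this loss is actually realized when the adversary exploits Markov-modeled temporal correlations.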