On the Meaning and Limits of Empirical Differential Privacy
Empirical differential privacy (EDP) has been proposed as an alternative to differential privacy (DP), with the important advantages that the procedure can be applied to any Bayesian model and requires less technical work on the part of the user. While EDP has been shown to be easy to implement, little is known about its theoretical underpinnings. This paper proposes a careful investigation of the meaning and limits of EDP as a measure of privacy. We show that EDP cannot simply be considered an empirical version of DP, and that it is better thought of as a sensitivity measure on posterior distributions. We also show that EDP is not well-defined, in that its value depends crucially on the choice of discretization used in the procedure, and that it can be very computationally intensive to apply in practice. We illustrate these limitations with two simple conjugate Bayesian models: the beta-binomial model and the normal-normal model.
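To make the discretization issue concrete, the following is a minimal sketch of an EDP-style calculation for the beta-binomial model, assuming the common leave-one-out procedure: the posterior on the full data set is compared with each posterior obtained by removing one record, and the empirical epsilon is the largest absolute log-ratio of posterior probabilities over a chosen grid of bins. The function name, the fixed-grid discretization of [0, 1], and the midpoint approximation of bin probabilities are all illustrative choices, not the authors' exact procedure.

```python
import math
import numpy as np

def beta_bin_probs(a, b, edges):
    """Approximate bin probabilities of a Beta(a, b) posterior by
    evaluating its density at bin midpoints and normalizing
    (a crude but simple discretization)."""
    mids = 0.5 * (edges[:-1] + edges[1:])
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    log_pdf = log_norm + (a - 1) * np.log(mids) + (b - 1) * np.log(1 - mids)
    p = np.exp(log_pdf)
    return p / p.sum()

def edp_epsilon_beta_binomial(data, a0=1.0, b0=1.0, n_bins=100):
    """Empirical epsilon for Bernoulli data under a Beta(a0, b0) prior:
    the max absolute log-ratio, over grid bins, between the full-data
    posterior and each leave-one-out posterior."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    s, n = int(data.sum()), len(data)
    p_full = beta_bin_probs(a0 + s, b0 + n - s, edges)
    eps = 0.0
    for i in range(n):
        s_i = s - data[i]
        q = beta_bin_probs(a0 + s_i, b0 + (n - 1) - s_i, edges)
        mask = (p_full > 0) & (q > 0)  # compare only bins with mass
        eps = max(eps, np.abs(np.log(p_full[mask] / q[mask])).max())
    return eps

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=50)
print(edp_epsilon_beta_binomial(x, n_bins=100))
```

Running the same function with different values of `n_bins` generally yields different empirical epsilons, which illustrates the paper's point that the resulting value depends on the chosen discretization.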
Copyright is retained by the authors. By submitting to this journal, the author(s) license the article under the Creative Commons License – Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0), unless choosing a more lenient license (for instance, public domain). For situations not allowed under CC BY-NC-ND, short sections of text, not to exceed two paragraphs, may be quoted without explicit permission provided that full credit, including © notice, is given to the source.
Authors of articles published by the journal grant the journal the right to store the articles in its databases for an unlimited period of time and to distribute and reproduce the articles electronically.
Funding: Globalink Research Internship; Natural Sciences and Engineering Research Council of Canada (grant RGPIN-435472-2013).