Towards a Systematic Analysis of Privacy Definitions
Abstract
In statistical privacy, a privacy definition is regarded as a set of algorithms that are allowed to process sensitive data. It is often helpful to consider the complementary view that privacy definitions are also contracts that guide the behavior of algorithms that take in sensitive data and produce sanitized data. Historically, data privacy breaches have been the result of fundamental misunderstandings about what a particular privacy definition guarantees.
Privacy definitions are often analyzed using a highly targeted approach: a specific attack strategy is evaluated to determine whether a specific type of information can be inferred. If the attack works, one can conclude that the privacy definition is too weak. If it does not work, one often gains little information about the definition's security (perhaps a slightly different attack would have worked?). Furthermore, such targeted strategies will not identify cases where a privacy definition protects unnecessary pieces of information.
On the other hand, technical results concerning generalizable and systematic analyses of privacy are few, but they have significantly advanced our understanding of the design of privacy definitions. We add to this literature with a novel methodology for analyzing the Bayesian properties of a privacy definition. Its goal is to identify precisely the type of information being protected, thereby making it easier to identify (and later remove) unnecessary data protections.
Using privacy building blocks (which we refer to as axioms), we turn questions about semantics into mathematical problems: the construction of a consistent normal form and the subsequent construction of the row cone, a geometric object that encapsulates the Bayesian guarantees provided by a privacy definition.
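To make the geometry concrete, here is a minimal formal sketch of the row cone, assuming a finite set of possible input datasets {D_1, ..., D_n}; the notation is illustrative, and the precise construction in the paper proceeds via the consistent normal form rather than the raw privacy definition.

```latex
% Illustrative sketch (finite input domain assumed). Each mechanism
% $\mathcal{M}$ allowed by a privacy definition $\mathrm{Priv}$, together
% with each output $\omega$, determines a row vector of likelihoods:
\[
  \vec{m}_{\mathcal{M},\omega}
    = \bigl( \Pr[\mathcal{M}(D_1)=\omega],\ \dots,\ \Pr[\mathcal{M}(D_n)=\omega] \bigr).
\]
% The row cone collects all nonnegative scalings of such rows, taken over
% every allowed mechanism and output:
\[
  \mathrm{rowcone}(\mathrm{Priv})
    = \bigl\{\, c\,\vec{m}_{\mathcal{M},\omega} \;:\;
        c \ge 0,\ \mathcal{M}\in\mathrm{Priv},\ \omega\in\mathrm{range}(\mathcal{M}) \,\bigr\}.
\]
% Linear constraints that hold over the entire cone bound how far an
% attacker's posterior can move from the prior, which is the sense in
% which the cone encapsulates Bayesian guarantees.
```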
We apply these ideas to study randomized response, FRAPP/PRAM, and several algorithms that add integer-valued noise to their inputs; we show that their privacy properties can be stated in terms of the protection of various notions of parity of a dataset. Randomized response, in particular, provides unnecessarily strong protections for parity, and so we also show how our methodology can be used to relax privacy definitions.
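As a small illustration of the parity phenomenon, the sketch below simulates per-record randomized response and measures how little the sanitized output reveals about the dataset's parity. The function names, the bit-vector encoding of the dataset, and the flip probability p = 0.75 are assumptions for illustration, not the paper's construction.

```python
import random

def randomized_response(bits, p):
    """Report each bit truthfully with probability p, flipped otherwise."""
    return [b if random.random() < p else 1 - b for b in bits]

def parity(bits):
    """XOR of all bits: the notion of dataset parity used here."""
    result = 0
    for b in bits:
        result ^= b
    return result

# Agreement between the reported parity and the true parity equals
# (1 + (2p - 1)^n) / 2, which decays toward 1/2 (a coin flip)
# exponentially in the dataset size n: the sanitized output carries
# almost no information about parity even for moderate n.
n, p, trials = 20, 0.75, 100_000
data = [random.randint(0, 1) for _ in range(n)]
agree = sum(parity(randomized_response(data, p)) == parity(data)
            for _ in range(trials))
print(f"true parity:          {parity(data)}")
print(f"empirical agreement:  {agree / trials:.3f}")
print(f"predicted agreement:  {(1 + (2 * p - 1) ** n) / 2:.3f}")
```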
Article Details
Copyright is retained by the authors. By submitting to this journal, the author(s) license the article under the Creative Commons License – Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0), unless choosing a more lenient license (for instance, public domain). For situations not allowed under CC BY-NC-ND, short sections of text, not to exceed two paragraphs, may be quoted without explicit permission provided that full credit, including © notice, is given to the source.
Authors of articles published by the journal grant the journal the right to store the articles in its databases for an unlimited period of time and to distribute and reproduce the articles electronically.
Funding data
National Science Foundation
Grant number 1054389