Rethinking Disclosure Prevention with Pointwise Maximal Leakage

Sara Saeidian
https://orcid.org/0000-0001-6908-559X
Giulia Cervia
Tobias J. Oechtering
https://orcid.org/0000-0002-0036-9049
Mikael Skoglund
https://orcid.org/0000-0002-7926-5081

Abstract

This paper introduces a paradigm shift in the way privacy is defined, driven by a novel interpretation of the fundamental result of Dwork and Naor on the impossibility of absolute disclosure prevention. We propose a general model of utility and privacy in which utility is achieved by disclosing the value of low-entropy features of a secret X, while privacy is maintained by hiding the value of high-entropy features of X. Adopting this model, we prove that, contrary to popular opinion, it is possible to provide meaningful inferential privacy guarantees. These guarantees are given in terms of an operationally meaningful information measure called pointwise maximal leakage (PML) and prevent privacy breaches against a large class of adversaries, regardless of their prior beliefs about X. We show that PML-based privacy is compatible with, and provides insights into, existing notions such as differential privacy. We also argue that our new framework enables highly flexible mechanism designs, in which the randomness of a mechanism can be adjusted to the entropy of the data, ultimately leading to higher utility.

Article Details

How to Cite
Saeidian, Sara, Giulia Cervia, Tobias J. Oechtering, and Mikael Skoglund. 2025. “Rethinking Disclosure Prevention With Pointwise Maximal Leakage”. Journal of Privacy and Confidentiality 15 (1). https://doi.org/10.29012/jpc.893.
Section
Articles
