Journal of Privacy and Confidentiality
Issue feed: https://journalprivacyconfidentiality.org/index.php/jpc/issue/feed
Managing editor: Lars Vilhuber (managing-editor@journalprivacyconfidentiality.org)

The Journal of Privacy and Confidentiality is an open-access, multi-disciplinary journal whose purpose is to facilitate the coalescence of research methodologies and activities in the areas of privacy, confidentiality, and disclosure limitation. The JPC seeks to publish a wide range of research and review papers, not only from academia but also from government (especially official statistical agencies) and industry, and to serve as a forum for the exchange of views, discussion, and news.

Editorial for Special Issue on the Theory and Practice of Differential Privacy 2018
https://journalprivacyconfidentiality.org/index.php/jpc/article/view/736
Aleksandar Nikolov (anikolov@cs.toronto.edu), Lars Vilhuber (managing-editor@journalprivacyconfidentiality.org)
Published 2020-01-11. Copyright (c) 2020 Aleksandar Nikolov.

This special issue includes selected contributions from the 4th Workshop on Theory and Practice of Differential Privacy (TPDP 2018), held in Toronto, Canada on 15 October 2018 as part of the ACM Conference on Computer and Communications Security (CCS).

The Bounded Laplace Mechanism in Differential Privacy
https://journalprivacyconfidentiality.org/index.php/jpc/article/view/715
Naoise Holohan (naoise.holohan@ibm.com), Spiros Antonatos (santonat@ie.ibm.com), Stefano Braghin (stefanob@ie.ibm.com), Pól Mac Aonghusa (aonghusa@ie.ibm.com)
Published 2019-12-23. Copyright (c) 2019 Naoise Holohan, Spiros Antonatos, Stefano Braghin, Pól Mac Aonghusa.

The Laplace mechanism is the workhorse of differential privacy, applied in many settings where numerical data are processed. However, because of its infinite support, the Laplace mechanism can return semantically impossible values, such as negative counts. There are two popular remedies: (i) bounding or capping the output values and (ii) bounding the mechanism's support. In this paper, we show that bounding the mechanism's support while reusing the parameters of the standard Laplace mechanism does not, in general, preserve differential privacy. We also present a robust method to compute the optimal mechanism parameters that achieve differential privacy in this setting.
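As an illustration of the failure mode described in the Bounded Laplace abstract, the following minimal Python sketch contrasts the standard Laplace mechanism with a naively bounded variant that truncates the support while reusing the standard noise scale. The function names and the rejection-sampling formulation are illustrative assumptions, not the authors' code; the paper's corrected parameter computation is not reproduced here.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Standard Laplace mechanism: add Laplace(0, sensitivity / epsilon) noise."""
    rng = rng or np.random.default_rng()
    return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

def naively_bounded_laplace(value, sensitivity, epsilon, lower, upper, rng=None):
    """Bound the mechanism's support to [lower, upper] by rejection sampling,
    while reusing the standard scale sensitivity / epsilon.

    This is the configuration the paper analyses: truncating the support this
    way, without recomputing the noise scale, does not in general satisfy
    epsilon-differential privacy.
    """
    rng = rng or np.random.default_rng()
    while True:
        sample = value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)
        if lower <= sample <= upper:
            return sample

# Example: a noisy count that must stay between 0 and 100.
print(laplace_mechanism(42, sensitivity=1, epsilon=0.5))
print(naively_bounded_laplace(42, sensitivity=1, epsilon=0.5, lower=0, upper=100))
```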
Local Differential Privacy for Evolving Data
https://journalprivacyconfidentiality.org/index.php/jpc/article/view/718
Matthew Joseph (majos@cis.upenn.edu), Aaron Roth (aaroth@cis.upenn.edu), Jonathan Ullman (jullman@ccs.neu.edu), Bo Waggoner (bwag@colorado.edu)
Published 2020-01-12. Copyright (c) 2020 Matthew Joseph, Aaron Roth, Jonathan Ullman, Bo Waggoner.

There are now several large-scale deployments of differential privacy used to collect statistical information about users. However, these deployments periodically recollect the data and recompute the statistics using algorithms designed for a single use. As a result, these systems do not provide meaningful privacy guarantees over long time scales. Moreover, existing techniques to mitigate this effect do not apply in the "local model" of differential privacy that these systems use.

In this paper, we introduce a new technique for local differential privacy that makes it possible to maintain up-to-date statistics over time, with privacy guarantees that degrade only with the number of changes in the underlying distribution rather than the number of collection periods. We use our technique to track a changing statistic in the setting where users are partitioned into an unknown collection of groups, and in every time period each user draws a single bit from a common (but changing) group-specific distribution. We also provide an application to frequency and heavy-hitter estimation.

Linear Program Reconstruction in Practice
https://journalprivacyconfidentiality.org/index.php/jpc/article/view/711
Aloni Cohen (aloni.cohen@gmail.com), Kobbi Nissim (kobbi.nissim@georgetown.edu)
Published 2020-01-14. Copyright (c) 2020 Aloni Cohen, Kobbi Nissim.

We briefly report on a successful linear program reconstruction attack performed on a production statistical-query system using a real dataset. The attack was deployed in a test environment in the course of the Aircloak Challenge bug bounty program and is based on the reconstruction algorithm of Dwork, McSherry, and Talwar. We empirically evaluate the effectiveness of the algorithm, and of a related algorithm by Dinur and Nissim, across various dataset sizes, error rates, and numbers of queries in a Gaussian noise setting.
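The style of reconstruction attack referenced above can be illustrated with a small linear program: given noisy answers to random subset-sum queries over a secret bit vector, solve for the vector in [0, 1]^n that minimises the total absolute error and round. The Python sketch below is a toy illustration under assumed parameters (query counts, noise scale, dataset size are made up), not the attack deployed against the Aircloak system.

```python
import numpy as np
from scipy.optimize import linprog

def reconstruct(queries, noisy_answers):
    """Toy Dinur-Nissim-style reconstruction via linear programming.

    queries: (m, n) 0/1 matrix of subset-sum queries.
    noisy_answers: length-m vector of noisy answers.
    Returns a 0/1 estimate of the n secret bits.
    """
    m, n = queries.shape
    # Variables: x_1..x_n (candidate bits), t_1..t_m (per-query slacks).
    c = np.concatenate([np.zeros(n), np.ones(m)])      # minimise total slack
    A_ub = np.block([[queries, -np.eye(m)],            #  Qx - a <= t
                     [-queries, -np.eye(m)]])          #  a - Qx <= t
    b_ub = np.concatenate([noisy_answers, -noisy_answers])
    bounds = [(0, 1)] * n + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return (res.x[:n] > 0.5).astype(int)

# Toy demo: 64 secret bits, 256 random subset queries with Gaussian noise.
rng = np.random.default_rng(0)
secret = rng.integers(0, 2, size=64)
Q = rng.integers(0, 2, size=(256, 64))
answers = Q @ secret + rng.normal(scale=2.0, size=256)
estimate = reconstruct(Q, answers)
print("fraction of bits recovered:", np.mean(estimate == secret))
```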
Differentially Private Inference for Binomial Data
https://journalprivacyconfidentiality.org/index.php/jpc/article/view/725
Jordan Alexander Awan (awan@psu.edu), Aleksandra Slavkovic (sesa@psu.edu)
Published 2020-01-15. Copyright (c) 2020 Jordan Alexander Awan, Aleksandra Slavkovic.

We derive uniformly most powerful (UMP) tests for simple and one-sided hypotheses about a population proportion within the framework of differential privacy (DP), optimizing finite-sample performance. We show that, in general, DP hypothesis tests can be written in terms of linear constraints and, for exchangeable data, can always be expressed as a function of the empirical distribution. Using this structure, we prove a "Neyman-Pearson lemma" for binomial data under DP, where the DP-UMP test depends only on the sample sum. Our tests can also be stated as a post-processing of a random variable whose distribution we coin "Truncated-Uniform-Laplace" (Tulap), a generalization of the Staircase and discrete Laplace distributions. Furthermore, we obtain exact p-values, which are easily computed in terms of the Tulap random variable.

Using the above techniques, we show that our tests can be applied to give uniformly most accurate one-sided confidence intervals and optimal confidence distributions. We also derive uniformly most powerful unbiased (UMPU) two-sided tests, which lead to uniformly most accurate unbiased (UMAU) two-sided confidence intervals. We show that our results can be applied to distribution-free hypothesis tests for continuous data. Our simulation results demonstrate that all our tests have exact type I error and are more powerful than current techniques.

Privacy Profiles and Amplification by Subsampling
https://journalprivacyconfidentiality.org/index.php/jpc/article/view/726
Borja Balle (borja.balle@gmail.com), Gilles Barthe (gilles.barthe@imdea.org), Marco Gaboardi (gaboardi@buffalo.edu)
Published 2020-01-15. Copyright (c) 2020 Borja Balle, Gilles Barthe, Marco Gaboardi.

Differential privacy provides a robust, quantifiable methodology to measure and control the privacy leakage of data analysis algorithms. A fundamental insight is that by forcing algorithms to be randomized, their privacy leakage can be characterized by measuring the dissimilarity between the output distributions produced by applying the algorithm to pairs of datasets differing in one individual. Since the introduction of differential privacy, several variants of the original definition have been proposed by changing the measure of dissimilarity between distributions, including concentrated, zero-concentrated, and Rényi differential privacy.

The first contribution of this paper is to introduce the notion of the privacy profile of a mechanism. This profile captures all valid (ε, δ) differential privacy parameters satisfied by a given mechanism, and contrasts with the usual approach of providing guarantees in terms of a single point on this curve. We show that knowledge of this curve is equivalent to knowledge of the privacy guarantees with respect to the alternative definitions listed above. This sheds further light on the connections between different privacy definitions and suggests that these should be considered alternative but otherwise equivalent points of view.

The second contribution of this paper is to apply the privacy-profile machinery to study the so-called "privacy amplification by subsampling" principle, which ensures that a differentially private mechanism run on a random subsample of a population provides higher privacy guarantees than when run on the entire population. Several instances of this principle have been studied for different random subsampling methods, each with an ad-hoc analysis. In this paper we set out to study this phenomenon in detail, with the aim of providing a general method capable of recovering prior analyses in a streamlined fashion. Our method makes extensive use of coupling arguments and introduces a new tool for analysing differential privacy for mixture distributions.

Program for TPDP 2018
https://journalprivacyconfidentiality.org/index.php/jpc/article/view/697
Aleksandar Nikolov (anikolov@cs.toronto.edu), Lars Vilhuber (managing-editor@journalprivacyconfidentiality.org)
Published 2018-11-21. Copyright (c) 2018 Aleksandar Nikolov.

Theory and Practice of Differential Privacy 2018 (TPDP 2018) was held on 15 October 2018 in Toronto, ON, Canada, as part of CCS 2018. This is the final program.
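The "Privacy Profiles and Amplification by Subsampling" abstract above refers to the privacy amplification by subsampling principle. For reference, the Python sketch below shows one standard point-wise instance of that principle, the bound for Poisson subsampling under the add/remove-one neighbouring relation; it is an illustration of a known special case, not the paper's general privacy-profile machinery.

```python
import math

def amplify_by_poisson_subsampling(epsilon, delta, gamma):
    """Classic privacy-amplification-by-subsampling bound.

    If a mechanism is (epsilon, delta)-DP and is run on a Poisson subsample
    that includes each record independently with probability gamma, the
    subsampled mechanism satisfies (epsilon', gamma * delta)-DP with
        epsilon' = log(1 + gamma * (exp(epsilon) - 1)).
    """
    eps_amplified = math.log(1.0 + gamma * (math.exp(epsilon) - 1.0))
    return eps_amplified, gamma * delta

# Example: a (1.0, 1e-6)-DP mechanism run on a 5% Poisson subsample.
print(amplify_by_poisson_subsampling(1.0, 1e-6, 0.05))
```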