https://journalprivacyconfidentiality.org/index.php/jpc/issue/feed
Journal of Privacy and Confidentiality
2024-02-11T08:33:57-08:00
Lars Vilhuber and/or Rachel Cummings managing-editor@journalprivacyconfidentiality.org

The Journal of Privacy and Confidentiality is an open-access, multi-disciplinary journal whose purpose is to facilitate the coalescence of research methodologies and activities in the areas of privacy, confidentiality, and disclosure limitation. The JPC seeks to publish a wide range of research and review papers, not only from academia but also from government (especially official statistical agencies) and industry, and to serve as a forum for the exchange of views, discussion, and news.

https://journalprivacyconfidentiality.org/index.php/jpc/article/view/850
The Bounded Gaussian Mechanism for Differential Privacy
2023-01-12T11:41:02-08:00
Bo Chen bo.chen@ufl.edu, Matthew Hale matthewhale@ufl.edu

The Gaussian mechanism is a differential privacy mechanism commonly used to protect numerical data. However, it may be ill-suited to some applications because it has unbounded support and can therefore produce invalid numerical answers to queries, such as negative ages or human heights in the tens of meters. One can project such private values onto valid ranges of data, though these projections cause private query responses to accumulate at the boundaries of those ranges, thereby harming accuracy. Motivated by the need for both privacy and accuracy over bounded domains, we present a bounded Gaussian mechanism for differential privacy, which has support only on a given region. We present both univariate and multivariate versions of this mechanism and illustrate a significant reduction in variance relative to comparable existing work.

2024-02-11T00:00:00-08:00
Copyright (c) 2024 Bo Chen, Matthew Hale
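To make the projection problem above concrete, here is a minimal sketch (with illustrative names and parameters; this is not the paper's mechanism) contrasting the usual project-then-release approach with a rejection-sampled Gaussian restricted to a valid range.

```python
import numpy as np

rng = np.random.default_rng(0)

def projected_gaussian(value, sigma, lo, hi):
    # Add Gaussian noise, then clip into [lo, hi]; all out-of-range
    # draws pile up exactly on the boundaries lo and hi.
    return float(np.clip(value + rng.normal(0.0, sigma), lo, hi))

def truncated_gaussian(value, sigma, lo, hi):
    # Rejection-sample a Gaussian restricted to [lo, hi], so the
    # released value has support only on the valid range.
    while True:
        draw = value + rng.normal(0.0, sigma)
        if lo <= draw <= hi:
            return draw

# Illustrative query: a private age, with valid range [0, 120] years.
true_age = 2.0
print(projected_gaussian(true_age, sigma=5.0, lo=0.0, hi=120.0))
print(truncated_gaussian(true_age, sigma=5.0, lo=0.0, hi=120.0))
```

With these illustrative numbers, roughly a third of the projected draws land exactly on the boundary 0, which is the accumulation effect the abstract describes. Note that keeping the same noise scale after truncation does not by itself preserve the original $(\varepsilon, \delta)$ guarantee; calibrating the noise correctly for the bounded domain is what the paper works out.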
https://journalprivacyconfidentiality.org/index.php/jpc/article/view/820
Differentially Private Guarantees for Analytics and Machine Learning on Graphs: A Survey of Results
2023-03-31T16:21:20-07:00
Tamara T. Mueller tamara.mueller@tum.de, Dmitrii Usynin dmitrii.usynin@tum.de, Johannes C. Paetzold johannes.paetzold@tum.de, Rickmer Braren rbraren@tum.de, Daniel Rueckert daniel.rueckert@tum.de, Georgios Kaissis g.kaissis@tum.de

We study the applications of differential privacy (DP) in the context of graph-structured data and discuss the formulations of DP applicable to the publication of graphs and their associated statistics, as well as machine learning on graph-based data, including graph neural networks (GNNs). Interpreting DP guarantees in the context of graph-structured data can be challenging, as individual data points are interconnected (often non-linearly or sparsely). This connectivity complicates the computation of individual privacy loss in differentially private learning. The problem is exacerbated by the absence of a single, well-established formulation of DP in graph settings. This issue extends to the domain of GNNs, rendering private machine learning on graph-structured data a challenging task. A lack of prior systematisation work motivated us to study graph-based learning from a privacy perspective. In this work, we systematise the different formulations of DP on graphs and discuss challenges and promising applications, including the GNN domain. We compare and separate existing works into graph analytics tasks and graph learning tasks with GNNs. We conclude with a discussion of open questions and potential directions for further research in this area.

2024-02-11T00:00:00-08:00
Copyright (c) 2024 Tamara T. Mueller, Dmitrii Usynin, Johannes C. Paetzold, Rickmer Braren, Daniel Rueckert, Georgios Kaissis

https://journalprivacyconfidentiality.org/index.php/jpc/article/view/823
Exact Privacy Analysis of the Gaussian Sparse Histogram Mechanism
2023-06-15T07:20:46-07:00
Arjun Wilkins awilkins@fb.com, Daniel Kifer duk17@psu.edu, Danfeng Zhang dbz5017@psu.edu, Brian Karrer briankarrer@fb.com

Sparse histogram methods can be useful for returning differentially private counts of items in large or infinite histograms, for large group-by queries, and, more generally, for releasing a set of statistics with sufficient item counts. We consider the Gaussian version of the sparse histogram mechanism and study the exact $(\varepsilon, \delta)$ differential privacy guarantees it satisfies. We compare these exact $(\varepsilon, \delta)$ parameters to the simpler overestimates used in prior work to quantify the impact of their looser privacy bounds.

2024-02-11T00:00:00-08:00
Copyright (c) 2024 Arjun Wilkins, Daniel Kifer, Danfeng Zhang, Brian Karrer
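For readers new to sparse histograms, the following is a minimal sketch of the generic noise-then-threshold pattern, assuming a simple item-count query; all names and parameter values are illustrative. The paper's subject is the exact $(\varepsilon, \delta)$ analysis of such a mechanism, not the pattern itself, and in practice the noise scale and threshold must be calibrated jointly to the target privacy parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_sparse_histogram(counts, sigma, threshold):
    # Add Gaussian noise to every observed item count, then release
    # only the noisy counts that clear the threshold; everything
    # else (including never-observed items) is silently dropped,
    # which is what keeps the output sparse for large domains.
    released = {}
    for item, count in counts.items():
        noisy = count + rng.normal(0.0, sigma)
        if noisy > threshold:
            released[item] = noisy
    return released

# Illustrative group-by result: item -> count.
counts = {"apple": 50, "banana": 12, "cherry": 1}
print(gaussian_sparse_histogram(counts, sigma=2.0, threshold=10.0))
```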
https://journalprivacyconfidentiality.org/index.php/jpc/article/view/869
Private Convex Optimization via Exponential Mechanism
2023-04-18T10:00:51-07:00
Sivakanth Gopi sigopi@microsoft.com, Yin Tat Lee yintat@uw.edu, Daogao Liu liudaogao@gmail.com

In this paper, we study private optimization problems for non-smooth convex functions $F(x)=\mathbb{E}_i f_i(x)$ on $\mathbb{R}^d$. We show that modifying the exponential mechanism by adding an $\ell_2^2$ regularizer to $F(x)$ and sampling from $\pi(x)\propto \exp(-k(F(x)+\mu\|x\|_2^2/2))$ recovers both the known optimal empirical risk and population loss under $(\varepsilon,\delta)$-DP. Furthermore, we show how to implement this mechanism using $\widetilde{O}(n \min(d, n))$ queries to $f_i(x)$ for DP-SCO, where $n$ is the number of samples/users and $d$ is the ambient dimension. We also give a (nearly) matching lower bound of $\widetilde{\Omega}(n \min(d, n))$ on the number of evaluation queries.

Our results rely on the following tools, which are of independent interest:
- We prove Gaussian Differential Privacy (GDP) of the exponential mechanism if the loss function is strongly convex and the perturbation is Lipschitz. Our privacy bound is optimal, as it includes the privacy of the Gaussian mechanism as a special case, and is proved using the isoperimetric inequality for strongly log-concave measures.
- We show how to sample from $\exp(-F(x)-\mu \|x\|^2_2/2)$ for $G$-Lipschitz $F$ with $\eta$ error in total variation (TV) distance using $\widetilde{O}((G^2/\mu) \log^2(d/\eta))$ unbiased queries to $F(x)$. This is the first sampler whose query complexity has polylogarithmic dependence on both the dimension $d$ and the accuracy $\eta$.

2024-02-11T00:00:00-08:00
Copyright (c) 2024 Sivakanth Gopi, Yin Tat Lee, Daogao Liu

https://journalprivacyconfidentiality.org/index.php/jpc/article/view/870
Numerical Composition of Differential Privacy
2023-04-06T08:40:28-07:00
Sivakanth Gopi gopisivakanth@gmail.com, Yin Tat Lee yintat@uw.edu, Lukas Wutschitz lukas.wutschitz@microsoft.com

We give a fast algorithm to optimally compose privacy guarantees of differentially private (DP) algorithms to arbitrary accuracy. Our method is based on the notion of privacy loss random variables to quantify the privacy loss of DP algorithms. The running time and memory needed for our algorithm to approximate the privacy curve of a DP algorithm composed with itself $k$ times are $\tilde{O}(\sqrt{k})$. This improves over the best prior method, by Koskela et al. (2021), which requires $\tilde{\Omega}(k^{1.5})$ running time. We demonstrate the utility of our algorithm by accurately computing the privacy loss of the DP-SGD algorithm of Abadi et al. (2016), showing that it speeds up the privacy computations by a few orders of magnitude compared to prior work while maintaining similar accuracy.

2024-02-11T00:00:00-08:00
Copyright (c) 2024 Sivakanth Gopi, Yin Tat Lee, Lukas Wutschitz
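As a rough illustration of the privacy-loss-random-variable idea, the sketch below discretizes the privacy loss distribution (PLD) of the Gaussian mechanism (sensitivity 1 and noise scale $\sigma$, for which the privacy loss is distributed as $N(\mu^2/2, \mu^2)$ with $\mu = 1/\sigma$), self-composes it $k$ times with a single FFT, and reads off $\delta(\varepsilon)$. All names and grid parameters are illustrative assumptions; this naive uniform discretization omits the truncation and error tracking needed for rigorous bounds, and it is the straightforward baseline rather than the paper's $\tilde{O}(\sqrt{k})$ algorithm.

```python
import numpy as np

def gaussian_pld(sigma, step=0.01, tail=20.0):
    # The privacy loss random variable of the Gaussian mechanism with
    # sensitivity 1 is N(mu^2/2, mu^2), mu = 1/sigma; bin its density
    # onto a uniform grid of spacing `step` over [-tail, tail).
    mu = 1.0 / sigma
    losses = np.arange(-tail, tail, step)
    probs = np.exp(-((losses - mu**2 / 2) ** 2) / (2 * mu**2))
    probs /= probs.sum()
    return losses, probs

def self_compose(losses, probs, k):
    # Privacy losses of independent mechanisms add, so the composed
    # PLD is the k-fold convolution of `probs` -- computed in one
    # shot by raising the FFT to the k-th power.
    n = len(probs)
    m = k * (n - 1) + 1                   # length of the k-fold convolution
    size = 1 << int(np.ceil(np.log2(m)))  # FFT size: power of two >= m
    composed = np.fft.irfft(np.fft.rfft(probs, size) ** k, size)[:m]
    composed = np.clip(composed, 0.0, None)  # clear tiny FFT negatives
    step = losses[1] - losses[0]
    return losses[0] * k + step * np.arange(m), composed

def delta(eps, losses, probs):
    # delta(eps) = E[(1 - exp(eps - L))_+]; only losses above eps contribute.
    above = losses > eps
    return float(np.sum(probs[above] * (1.0 - np.exp(eps - losses[above]))))

losses, probs = gaussian_pld(sigma=5.0)
closses, cprobs = self_compose(losses, probs, k=100)
print(delta(1.0, closses, cprobs))  # delta at eps = 1 after 100 compositions
```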