Journal of Privacy and Confidentiality
https://journalprivacyconfidentiality.org/index.php/jpc
<p>The <em>Journal of Privacy and Confidentiality</em> is an open-access multidisciplinary journal whose purpose is to facilitate the coalescence of research methodologies and activities in the areas of privacy, confidentiality, and disclosure limitation. The JPC seeks to publish a wide range of research and review papers, not only from academia, but also from government (especially official statistical agencies) and industry, and to serve as a forum for exchange of views, discussion, and news.</p>
Cornell University, ILR School
en-US
Journal of Privacy and Confidentiality
2575-8527
<p>Copyright is retained by the authors. By submitting to this journal, the author(s) license the article under the <a href="https://creativecommons.org/licenses/by-nc-nd/4.0/">Creative Commons License – Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)</a>, unless choosing a more lenient license (for instance, public domain). For situations not allowed under CC BY-NC-ND, short sections of text, not to exceed two paragraphs, may be quoted without explicit permission provided that full credit, including © notice, is given to the source.</p> <p>Authors of articles published by the journal grant the journal the right to store the articles in its databases for an unlimited period of time and to distribute and reproduce the articles electronically.</p>

The Bounded Gaussian Mechanism for Differential Privacy
https://journalprivacyconfidentiality.org/index.php/jpc/article/view/850
<p>The Gaussian mechanism is one differential privacy mechanism commonly used to protect numerical data. However, it may be ill-suited to some applications because it has unbounded support and thus can produce invalid numerical answers to queries, such as negative ages or human heights in the tens of meters. One can project such private values onto valid ranges of data, though such projections lead to the accumulation of private query responses at the boundaries of such ranges, thereby harming accuracy. Motivated by the need for both privacy and accuracy over bounded domains, we present a bounded Gaussian mechanism for differential privacy, which has support only on a given region. We present both univariate and multivariate versions of this mechanism and illustrate a significant reduction in variance relative to comparable existing work.</p>
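The projection problem the abstract describes is easy to see numerically. The sketch below contrasts a clipped (projected) Gaussian with a rejection-sampled truncated Gaussian; it only illustrates noise whose support is restricted to a region, and is not the paper's bounded Gaussian mechanism, whose calibration of the truncated noise to the privacy budget is the actual contribution. Function names and parameters here are ours.

```python
import random

def projected_gaussian(value, sigma, lo, hi, rng):
    # Standard Gaussian mechanism followed by projection onto [lo, hi]:
    # all mass that falls outside the range piles up at the endpoints.
    return min(max(rng.gauss(value, sigma), lo), hi)

def truncated_gaussian(value, sigma, lo, hi, rng):
    # Rejection sampling from the Gaussian restricted to [lo, hi].
    # Illustrative only: the paper's mechanism also recalibrates the
    # noise so that the truncated output remains differentially private.
    while True:
        x = rng.gauss(value, sigma)
        if lo <= x <= hi:
            return x

rng = random.Random(0)
proj = [projected_gaussian(1.0, 2.0, 0.0, 5.0, rng) for _ in range(10_000)]
trunc = [truncated_gaussian(1.0, 2.0, 0.0, 5.0, rng) for _ in range(10_000)]

at_boundary = sum(s in (0.0, 5.0) for s in proj) / len(proj)
print(f"projected samples sitting exactly on a boundary: {at_boundary:.0%}")
```

With these parameters roughly a third of the projected responses sit exactly on a range boundary, which is the accumulation-at-the-boundary effect the abstract refers to, while the truncated samples spread over the interior.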
Bo Chen
Matthew Hale
Copyright (c) 2024 Bo Chen, Matthew Hale
http://creativecommons.org/licenses/by-nc-nd/4.0
2024-02-11
2024-02-11
14 1
10.29012/jpc.850

Differentially Private Guarantees for Analytics and Machine Learning on Graphs: A Survey of Results
https://journalprivacyconfidentiality.org/index.php/jpc/article/view/820
<p>We study the applications of differential privacy (DP) in the context of graph-structured data and discuss the formulations of DP applicable to the publication of graphs and their associated statistics, as well as machine learning on graph-based data, including graph neural networks (GNNs). Interpreting DP guarantees in the context of graph-structured data can be challenging, as individual data points are interconnected (often nonlinearly or sparsely). This connectivity complicates the computation of individual privacy loss in differentially private learning. The problem is exacerbated by the absence of a single, well-established formulation of DP in graph settings. This issue extends to the domain of GNNs, rendering private machine learning on graph-structured data a challenging task. The lack of prior systematisation work motivated us to study graph-based learning from a privacy perspective. In this work, we systematise different formulations of DP on graphs and discuss challenges and promising applications, including those in the GNN domain. We compare and separate works into graph analytics tasks and graph learning tasks with GNNs. We conclude our work with a discussion of open questions and potential directions for further research in this area.</p>
Tamara T. Mueller
Dmitrii Usynin
Johannes C. Paetzold
Rickmer Braren
Daniel Rueckert
Georgios Kaissis
Copyright (c) 2024 Tamara T. Mueller, Dmitrii Usynin, Johannes C. Paetzold, Rickmer Braren, Daniel Rueckert, Georgios Kaissis
http://creativecommons.org/licenses/by-nc-nd/4.0
2024-02-11
2024-02-11
14 1
10.29012/jpc.820

Exact Privacy Analysis of the Gaussian Sparse Histogram Mechanism
https://journalprivacyconfidentiality.org/index.php/jpc/article/view/823
<p>Sparse histogram methods can be useful for returning differentially private counts of items in large or infinite histograms, large group-by queries, and, more generally, for releasing a set of statistics with sufficient item counts. We consider the Gaussian version of the sparse histogram mechanism and study the exact (epsilon, delta) differential privacy guarantees satisfied by this mechanism. We compare these exact (epsilon, delta) parameters to the simpler overestimates used in prior work to quantify the impact of their looser privacy bounds.</p>
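For readers unfamiliar with the mechanism being analysed, a minimal sketch of a Gaussian sparse histogram release might look as follows. The names, noise scale, and threshold are ours, and the exact (epsilon, delta) accounting for this noise-then-threshold step is precisely what the paper works out.

```python
import random

def gaussian_sparse_histogram(counts, sigma, threshold, rng):
    # Add Gaussian noise to every item's count, then release only the
    # items whose noisy count clears the threshold. Items absent from
    # `counts` have count zero and are never released, which is what
    # makes the release "sparse" even over a huge item universe.
    released = {}
    for item, count in counts.items():
        noisy = count + rng.gauss(0.0, sigma)
        if noisy >= threshold:
            released[item] = noisy
    return released

rng = random.Random(1)
counts = {"a": 500, "b": 3, "c": 1200, "d": 0}
released = gaussian_sparse_histogram(counts, sigma=10.0, threshold=50.0, rng=rng)
print(sorted(released))
```

The threshold is what prevents the mere presence of a rare item from leaking, but it also creates the delta-style failure events whose exact probability the paper characterises.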
Arjun Wilkins
Daniel Kifer
Danfeng Zhang
Brian Karrer
Copyright (c) 2024 Arjun Wilkins, Daniel Kifer, Danfeng Zhang, Brian Karrer
http://creativecommons.org/licenses/by-nc-nd/4.0
2024-02-11
2024-02-11
14 1
10.29012/jpc.823

Private Convex Optimization via Exponential Mechanism
https://journalprivacyconfidentiality.org/index.php/jpc/article/view/869
<p>In this paper, we study private optimization problems for nonsmooth convex functions $F(x)=\mathbb{E}_i f_i(x)$ on $\mathbb{R}^d$.<br>We show that modifying the exponential mechanism by adding an $\ell_2^2$ regularizer to $F(x)$ and sampling from $\pi(x)\propto \exp(-k(F(x)+\mu\|x\|_2^2/2))$ recovers both the known optimal empirical risk and population loss under $(\eps,\delta)$-DP. Furthermore, we show how to implement this mechanism using $\widetilde{O}(n \min(d, n))$ queries to $f_i(x)$ for DP-SCO, where $n$ is the number of samples/users and $d$ is the ambient dimension.<br>We also give a (nearly) matching lower bound $\widetilde{\Omega}(n \min(d, n))$ on the number of evaluation queries.</p> <p>Our results utilize the following tools that are of independent interest:<br>\begin{itemize}<br>\item We prove Gaussian Differential Privacy (GDP) of the exponential mechanism if the loss function is strongly convex and the perturbation is Lipschitz. Our privacy bound is \emph{optimal}, as it includes the privacy of the Gaussian mechanism as a special case and is proved using the isoperimetric inequality for strongly log-concave measures.<br>\item We show how to sample from $\exp(-F(x)-\mu \|x\|^2_2/2)$ for $G$-Lipschitz $F$ with $\eta$ error in total variation (TV) distance using $\widetilde{O}((G^2/\mu) \log^2(d/\eta))$ unbiased queries to $F(x)$. This is the first sampler whose query complexity has \emph{polylogarithmic dependence} on both dimension $d$ and accuracy $\eta$.<br>\end{itemize}</p>
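As a toy illustration of the regularized exponential mechanism described in the abstract, the sketch below tabulates a density proportional to exp(-k(F(x) + mu x^2/2)) for a one-dimensional nonsmooth convex loss and samples from it by inverse-CDF lookup. This grid approach is ours and only works in one dimension; the paper's contribution is sampling from such densities in high dimension with a query count that depends only polylogarithmically on dimension and accuracy.

```python
import bisect
import math
import random

def build_sampler(F, k, mu, grid):
    # Tabulate pi(x) proportional to exp(-k * (F(x) + mu * x^2 / 2)) on
    # a 1-D grid and return an inverse-CDF sampler over that grid.
    logw = [-k * (F(x) + mu * x * x / 2.0) for x in grid]
    m = max(logw)                      # stabilize before exponentiating
    cum, total = [], 0.0
    for v in logw:
        total += math.exp(v - m)
        cum.append(total)
    return lambda rng: grid[bisect.bisect_left(cum, rng.random() * total)]

F = lambda x: abs(x - 0.3)             # a nonsmooth convex loss
grid = [i / 1000.0 for i in range(-2000, 2001)]   # the interval [-2, 2]
sample = build_sampler(F, k=200.0, mu=0.5, grid=grid)

rng = random.Random(0)
samples = [sample(rng) for _ in range(500)]
mean = sum(samples) / len(samples)
print(f"sample mean: {mean:.3f}  (the regularized minimizer is x = 0.3)")
```

With a large inverse temperature k, the samples concentrate sharply around the regularized minimizer, which is the utility side of the mechanism; the privacy side is governed by how much the density can shift between neighboring datasets.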
Sivakanth Gopi
Yin Tat Lee
Daogao Liu
Copyright (c) 2024 Sivakanth Gopi, Yin Tat Lee, Daogao Liu
http://creativecommons.org/licenses/by-nc-nd/4.0
2024-02-11
2024-02-11
14 1
10.29012/jpc.869

Numerical Composition of Differential Privacy
https://journalprivacyconfidentiality.org/index.php/jpc/article/view/870
<p>We give a fast algorithm to optimally compose privacy guarantees of differentially private (DP) algorithms to arbitrary accuracy. Our method is based on the notion of \emph{privacy loss random variables} to quantify the privacy loss of DP algorithms.<br>The running time and memory needed for our algorithm to approximate the privacy curve of a DP algorithm composed with itself $k$ times is $\tilde{O}(\sqrt{k})$. This improves over the best prior method of Koskela et al. (2021), which requires $\tilde{\Omega}(k^{1.5})$ running time. We demonstrate the utility of our algorithm by accurately computing the privacy loss of the DP-SGD algorithm of Abadi et al. (2016) and showing that our algorithm speeds up the privacy computations by a few orders of magnitude compared to prior work, while maintaining similar accuracy.</p>
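To make the privacy-loss-random-variable idea concrete, here is a small self-contained sketch: it discretizes the privacy loss distribution (PLD) of the Gaussian mechanism, self-composes it by repeated squaring with direct convolution, and reads off delta at a given epsilon. All names and grid parameters are ours, and the toy version makes no attempt at the paper's $\tilde{O}(\sqrt{k})$ bound, which relies on FFT-based convolution and careful truncation.

```python
import math

def normal_cdf(x, mean, std):
    return 0.5 * (1.0 + math.erf((x - mean) / (std * math.sqrt(2.0))))

def gaussian_pld(sigma, step, bound):
    # Discretized PLD of the Gaussian mechanism with sensitivity 1 and
    # noise scale sigma: the privacy loss random variable is
    # N(1 / (2 sigma^2), 1 / sigma^2). Keys index multiples of `step`.
    mean, std = 1.0 / (2.0 * sigma ** 2), 1.0 / sigma
    n = int(bound / step)
    return {i: normal_cdf((i + 0.5) * step, mean, std)
               - normal_cdf((i - 0.5) * step, mean, std)
            for i in range(-n, n + 1)}

def convolve(p, q):
    # Privacy losses add under composition, so their PLDs convolve.
    out = {}
    for i, pi in p.items():
        for j, qj in q.items():
            out[i + j] = out.get(i + j, 0.0) + pi * qj
    return out

def delta_at(pld, step, eps):
    # delta(eps) = E[(1 - exp(eps - L))^+] over the privacy loss L.
    return sum(p * (1.0 - math.exp(eps - i * step))
               for i, p in pld.items() if i * step > eps)

sigma, k, step = 2.0, 8, 0.02
base = gaussian_pld(sigma, step, bound=3.0)
composed, b, kk = {0: 1.0}, base, k
while kk:                              # k-fold composition by squaring
    if kk & 1:
        composed = convolve(composed, b)
    kk >>= 1
    if kk:
        b = convolve(b, b)
delta = delta_at(composed, step, eps=1.0)
print(f"delta(eps=1) after {k} compositions at sigma={sigma}: {delta:.3f}")
```

For these parameters the result lands close to the closed-form value for the equivalent single Gaussian mechanism (about 0.29), and shrinking the grid step tightens the approximation; the paper's contribution is controlling exactly this discretization and truncation error while keeping the computation fast.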
Sivakanth Gopi
Yin Tat Lee
Lukas Wutschitz
Copyright (c) 2024 Sivakanth Gopi, Yin Tat Lee, Lukas Wutschitz
http://creativecommons.org/licenses/by-nc-nd/4.0
2024-02-11
2024-02-11
14 1
10.29012/jpc.870