Journal of Privacy and Confidentiality https://journalprivacyconfidentiality.org/index.php/jpc <p>The <em>Journal of Privacy and Confidentiality</em>&nbsp;is an open-access multi-disciplinary journal whose purpose is to facilitate the coalescence of research methodologies and activities in the areas of privacy, confidentiality, and disclosure limitation. The JPC seeks to publish a wide range of research and review papers, not only from academia, but also from government (especially official statistical agencies) and industry, and to serve as a forum for exchange of views, discussion, and news.</p> Society for Privacy and Confidentiality Research, Philadelphia, PA, USA en-US Journal of Privacy and Confidentiality 2575-8527 <p>Copyright is retained by the authors. By submitting to this journal, the author(s) license the article under the <a href="https://creativecommons.org/licenses/by-nc-nd/4.0/">Creative Commons License – Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)</a>, unless choosing a more lenient license (for instance, public domain). For situations not allowed under CC BY-NC-ND, short sections of text, not to exceed two paragraphs, may be quoted without explicit permission provided that full credit, including © notice, is given to the source.</p> <p>Authors of articles published by the journal grant the journal the right to store the articles in its databases for an unlimited period of time and to distribute and reproduce the articles electronically.</p> Avoiding Floating-Point Side Channels in the Report Noisy Max with Gap Mechanism https://journalprivacyconfidentiality.org/index.php/jpc/article/view/894 <p>The Noisy Max mechanism and its variations are fundamental private selection algorithms that are used to select items from a set of candidates (such as the most common diseases in a population), while controlling the privacy leakage in the underlying data. 
A recently proposed extension, Noisy Top-k with Gap, provides numerical information about how much better the selected items are compared to the non-selected items (e.g., how much more common the selected diseases are). This extra information comes at no privacy cost but crucially relies on infinite precision for the privacy guarantees. In this paper, we provide a finite-precision secure implementation of this algorithm that takes advantage of integer arithmetic.</p> Zeyu Ding John Durrell Daniel Kifer Prottay Protivash Guanhong Wang Yuxin Wang Yingtai Xiao Danfeng Zhang Copyright (c) 2025 Zeyu Ding, John Durrell, Daniel Kifer, Prottay Protivash, Guanhong Wang, Yuxin Wang, Yingtai Xiao, Danfeng Zhang https://creativecommons.org/licenses/by-nc-nd/4.0 2025-12-31 2025-12-31 15 3 10.29012/jpc.894 Achieving Privacy Utility Balance for Multivariate Time Series Data https://journalprivacyconfidentiality.org/index.php/jpc/article/view/916 <p>Utility-preserving data privatization is of utmost importance for data-producing agencies. The popular noise-addition privacy mechanism distorts autocorrelation patterns in time series data, thereby marring utility; in response, [21] introduced all-pass filtering (FLIP) as a utility-preserving time series data privatization method. Adapting this concept to multivariate data is more complex, and in this paper we propose a multivariate all-pass (MAP) filtering method, employing an optimization algorithm to achieve the best balance between data utility and privacy protection. To test the effectiveness of our approach, we apply MAP filtering to both simulated and real data, sourced from the U.S. Census Bureau’s Quarterly Workforce Indicators (QWI) dataset.</p> Gaurab Hore Tucker S.
McElroy, Anindya Roy https://creativecommons.org/licenses/by-nc-nd/4.0 2025-12-31 2025-12-31 15 3 10.29012/jpc.916 Bridging the Privacy Accounting Gap in DP-SGD https://journalprivacyconfidentiality.org/index.php/jpc/article/view/998 <p>Differentially Private Stochastic Gradient Descent (DP-SGD) is one of the most widely used algorithms for private machine learning. Due to its efficiency, most practical implementations of DP-SGD shuffle the training examples and divide them into fixed-size mini-batches during training. However, the privacy accounting typically assumes that Poisson subsampling was used, wherein each example is included in each mini-batch independently with some probability. Our first contribution is to show that there can be a substantial gap between these two versions of DP-SGD; specifically, the privacy accounting implies much stronger privacy guarantees than the implementations actually provide. As our second contribution, we propose two approaches to address this gap: (i) an implementation of Poisson subsampling using the Map-Reduce framework that can scale to large datasets that do not fit in memory and (ii) a novel Balls-and-Bins sampling that achieves the “best of both” sampling approaches. Namely, its implementation is similar to shuffling, and it leads to similar utility for DP-SGD training with similar-or-better privacy compared to Poisson subsampling in practical regimes of parameters.</p> Lynn Chua Badih Ghazi Charlie Harrison Ethan Leeman Pritish Kamath Ravi Kumar Pasin Manurangsi Amer Sinha Chiyuan Zhang Copyright (c) 2025 Lynn Chua, Badih Ghazi, Charlie Harrison, Ethan Leeman, Pritish Kamath, Ravi Kumar, Pasin Manurangsi, Amer Sinha, Chiyuan Zhang https://creativecommons.org/licenses/by-nc-nd/4.0 2025-12-31 2025-12-31 15 3 10.29012/jpc.998