https://journalprivacyconfidentiality.org/index.php/jpc/issue/feed
Journal of Privacy and Confidentiality
2025-12-31T12:17:47+00:00
Rachel Cummings (managing-editor@journalprivacyconfidentiality.org)
Open Journal Systems

<p>The <em>Journal of Privacy and Confidentiality</em> is an open-access multi-disciplinary journal whose purpose is to facilitate the coalescence of research methodologies and activities in the areas of privacy, confidentiality, and disclosure limitation. The JPC seeks to publish a wide range of research and review papers, not only from academia, but also from government (especially official statistical agencies) and industry, and to serve as a forum for exchange of views, discussion, and news.</p>

https://journalprivacyconfidentiality.org/index.php/jpc/article/view/894
Avoiding Floating-Point Side Channels in the Report Noisy Max with Gap Mechanism
2025-05-14T16:19:57+00:00
Zeyu Ding (dding1@binghamton.edu), John Durrell (jmd6968@psu.edu), Daniel Kifer (dkifer@cse.psu.edu), Prottay Protivash (pxp945@psu.edu), Guanhong Wang (guanhong@umd.edu), Yuxin Wang (yxwang@psu.edu), Yingtai Xiao (yxx5224@psu.edu), Danfeng Zhang (zhang@cse.psu.edu)

<p>The Noisy Max mechanism and its variations are fundamental private selection algorithms that are used to select items from a set of candidates (such as the most common diseases in a population), while controlling the privacy leakage in the underlying data. A recently proposed extension, Noisy Top-k with Gap, provides numerical information about how much better the selected items are compared to the non-selected items (e.g., how much more common the selected diseases are). This extra information comes at no privacy cost but crucially relies on infinite precision for the privacy guarantees.
In this paper, we provide a finite-precision secure implementation of this algorithm that takes advantage of integer arithmetic.</p>

2025-12-31T00:00:00+00:00
Copyright (c) 2025 Zeyu Ding, John Durrell, Daniel Kifer, Prottay Protivash, Guanhong Wang, Yuxin Wang, Yingtai Xiao, Danfeng Zhang

https://journalprivacyconfidentiality.org/index.php/jpc/article/view/916
Achieving Privacy Utility Balance for Multivariate Time Series Data
2025-04-28T04:51:19+00:00
Gaurab Hore (gaurabh1@umbc.edu), Tucker S. McElroy (tucker.s.mcelroy@census.gov), Anindya Roy (anindya@umbc.edu)

<p>Utility-preserving data privatization is of utmost importance for data-producing agencies. The popular noise-addition privacy mechanism distorts autocorrelation patterns in time series data, thereby marring utility; in response, [21] introduced all-pass filtering (FLIP) as a utility-preserving time series data privatization method. Adapting this concept to multivariate data is more complex, and in this paper we propose a multivariate all-pass (MAP) filtering method, employing an optimization algorithm to achieve the best balance between data utility and privacy protection. To test the effectiveness of our approach, we apply MAP filtering to both simulated and real data, sourced from the U.S. Census Bureau’s Quarterly Workforce Indicator (QWI) dataset.</p>

2025-12-31T00:00:00+00:00
Copyright (c) 2025 Gaurab Hore, Tucker S. McElroy, Anindya Roy

https://journalprivacyconfidentiality.org/index.php/jpc/article/view/998
Bridging the Privacy Accounting Gap in DP-SGD
2025-10-06T13:16:07+00:00
Lynn Chua (chualynn@google.com), Badih Ghazi (badihghazi@gmail.com), Charlie Harrison (csharrison@google.com), Ethan Leeman (ethanleeman@google.com), Pritish Kamath (pritish@alum.mit.edu), Ravi Kumar (ravi.k53@gmail.com), Pasin Manurangsi (pasin@google.com), Amer Sinha (amersinha@google.com), Chiyuan Zhang (chiyuan@google.com)

<p>Differentially Private Stochastic Gradient Descent (DP-SGD) is one of the most widely used algorithms for private machine learning.
For efficiency, most practical implementations of DP-SGD shuffle the training examples and divide them into fixed-size mini-batches during training. However, the privacy accounting typically assumes that Poisson subsampling was used, wherein each example is included in each mini-batch independently with some probability. Our first contribution is to show that there can be a substantial gap between these two versions of DP-SGD; specifically, the privacy accounting implies much stronger privacy guarantees than the implementations actually provide. As our second contribution, we propose two approaches to address this gap: (i) an implementation of Poisson subsampling using the Map-Reduce framework that can scale to large datasets that do not fit in memory, and (ii) a novel Balls-and-Bins sampling that achieves the “best of both” sampling approaches. Namely, its implementation is similar to shuffling, and it leads to similar utility for DP-SGD training with similar-or-better privacy compared to Poisson subsampling in practical regimes of parameters.</p>

2025-12-31T00:00:00+00:00
Copyright (c) 2025 Lynn Chua, Badih Ghazi, Charlie Harrison, Ethan Leeman, Pritish Kamath, Ravi Kumar, Pasin Manurangsi, Amer Sinha, Chiyuan Zhang
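To make the Report Noisy Max with Gap abstract concrete, here is a minimal sketch of the selection-with-gap idea using two-sided geometric (discrete Laplace) noise so that all arithmetic stays in the integers. The function names and the noise scale `2/epsilon` (a standard calibration for sensitivity-1 counting queries) are illustrative assumptions; this is not the paper's hardened, side-channel-free implementation.

```python
import numpy as np


def discrete_laplace(scale: float, size: int, rng: np.random.Generator) -> np.ndarray:
    """Two-sided geometric (discrete Laplace) noise, P(k) proportional to exp(-|k|/scale).

    The difference of two i.i.d. geometric variables is symmetric and
    discrete-Laplace distributed, so every value here is an integer.
    """
    p = 1.0 - np.exp(-1.0 / scale)
    return rng.geometric(p, size) - rng.geometric(p, size)


def noisy_max_with_gap(counts, epsilon: float, rng: np.random.Generator):
    """Return the index of the noisy maximum and the noisy gap to the runner-up.

    counts: integer query answers (e.g., disease frequencies). The gap is
    derived from the same noisy values used for selection, which is why the
    abstract describes it as coming at no extra privacy cost.
    """
    counts = np.asarray(counts, dtype=np.int64)
    noisy = counts + discrete_laplace(2.0 / epsilon, counts.size, rng)
    order = np.argsort(noisy)[::-1]              # indices by noisy value, descending
    winner, runner_up = order[0], order[1]
    gap = int(noisy[winner] - noisy[runner_up])  # non-negative by construction
    return int(winner), gap


rng = np.random.default_rng(0)
idx, gap = noisy_max_with_gap([120, 95, 40, 12], epsilon=1.0, rng=rng)
```

Working over the integers sidesteps the floating-point sampling issues that motivate the paper, though a real implementation must also sample the geometric variates in constant time to avoid timing channels.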
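The FLIP/MAP abstract rests on the fact that an all-pass filter has unit gain at every frequency, so it preserves the spectral density, and hence the autocorrelation structure, while scrambling phase. A univariate first-order sketch (the coefficient `a = 0.6` and function names are hypothetical, and the paper's multivariate optimization is not reproduced):

```python
import numpy as np

# First-order all-pass filter H(z) = (a + z^-1) / (1 + a z^-1), |a| < 1.
# |H(e^{i w})| = 1 for all w, so filtering changes phase but not the
# second-order (autocovariance) structure of the series.


def allpass_filter(x: np.ndarray, a: float) -> np.ndarray:
    """Apply the recursion y[n] = a*x[n] + x[n-1] - a*y[n-1] (zero initial state)."""
    y = np.zeros(len(x))
    x_prev = y_prev = 0.0
    for n in range(len(x)):
        y[n] = a * x[n] + x_prev - a * y_prev
        x_prev, y_prev = x[n], y[n]
    return y


def allpass_gain(a: float, omega: np.ndarray) -> np.ndarray:
    """Magnitude response |H(e^{i*omega})| of the filter above."""
    z_inv = np.exp(-1j * omega)
    return np.abs((a + z_inv) / (1.0 + a * z_inv))


rng = np.random.default_rng(1)
x = rng.standard_normal(500)
y = allpass_filter(x, 0.6)            # privatized series, same autocovariances
gains = allpass_gain(0.6, np.linspace(0.1, 3.0, 8))  # unit gain everywhere
```

The unit-magnitude check is the whole utility argument in miniature: any filter with this property leaves the autocovariance function of the input process unchanged in the limit.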
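The DP-SGD abstract contrasts three ways of forming mini-batches. A small sketch of the three schemes as described there, at the level of index sampling only (function names are hypothetical; this is neither the paper's Map-Reduce implementation nor a full DP-SGD training loop):

```python
import random


def shuffled_batches(n: int, batch_size: int, rng: random.Random):
    """What most implementations do: shuffle once, then take fixed-size batches."""
    idx = list(range(n))
    rng.shuffle(idx)
    return [idx[i:i + batch_size] for i in range(0, n, batch_size)]


def poisson_batches(n: int, q: float, num_batches: int, rng: random.Random):
    """What the accounting typically assumes: each example joins each batch
    independently with probability q, so batch sizes are random."""
    return [[i for i in range(n) if rng.random() < q] for _ in range(num_batches)]


def balls_and_bins_batches(n: int, num_batches: int, rng: random.Random):
    """Balls-and-Bins: each example lands in one uniformly random batch,
    giving near-uniform (but not fixed) batch sizes at shuffle-like cost."""
    batches = [[] for _ in range(num_batches)]
    for i in range(n):
        batches[rng.randrange(num_batches)].append(i)
    return batches


rng = random.Random(0)
shuf = shuffled_batches(100, 10, rng)
pois = poisson_batches(100, 0.1, 10, rng)
bnb = balls_and_bins_batches(100, 10, rng)
```

The structural difference is visible immediately: shuffling and Balls-and-Bins both partition the dataset (every example appears exactly once per epoch), whereas under Poisson subsampling an example may appear in several batches or in none, which is exactly why the two accountings diverge.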