Journal of Privacy and Confidentiality <p>The <em>Journal of Privacy and Confidentiality</em>&nbsp;is an open-access multi-disciplinary journal whose purpose is to facilitate the coalescence of research methodologies and activities in the areas of privacy, confidentiality, and disclosure limitation. The JPC seeks to publish a wide range of research and review papers, not only from academia, but also from government (especially official statistical agencies) and industry, and to serve as a forum for exchange of views, discussion, and news.</p> Cornell University, ILR School en-US Journal of Privacy and Confidentiality 2575-8527 <p>Copyright is retained by the authors. By submitting to this journal, the author(s) license the article under the <a href="">Creative Commons License – Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)</a>, unless choosing a more lenient license (for instance, public domain). For situations not allowed under CC BY-NC-ND, short sections of text, not to exceed two paragraphs, may be quoted without explicit permission provided that full credit, including © notice, is given to the source.</p> <p>Authors of articles published by the journal grant the journal the right to store the articles in its databases for an unlimited period of time and to distribute and reproduce the articles electronically.</p> Developing access to confidential data in France: results and new challenges <div><span lang="EN-US">Over the past twenty years, in various countries, secure access to data for members of the research community has been eased significantly. Such data include microdata and granular data from administrative records and detailed individual surveys. While some difficulties remain, the scene has been extensively redesigned, and new players have emerged on both sides of the fence: data holders and users, both challenging what seemed to be well-established boundaries. 
</span><span lang="EN-US">In the French case, access to confidential data for research purposes has been carefully facilitated. The paper analyses the French developments and current achievements, providing insights into how obstacles can be overcome and into the newly emerging issues.</span></div> Roxane SILBERMAN Copyright (c) 2021 Roxane SILBERMAN 2021-09-01 2021-09-01 11 2 10.29012/jpc.788 Research Data Centres - a Regulator's Perspective Michael Harvey Copyright (c) 2021 Michael Harvey 2021-09-01 2021-09-01 11 2 10.29012/jpc.769 DPSyn: Experiences in the NIST Differential Privacy Data Synthesis Challenges <p>We summarize the experience of participating in two differential privacy competitions organized by the National Institute of Standards and Technology (NIST). In this paper, we document our experiences in the competition, the approaches we have used, the lessons we have learned, and our call to the research community to further bridge the gap between theory and practice in DP research.</p> Tianhao Wang Ninghui Li Zhikun Zhang Copyright (c) 2021 Tianhao Wang, Ninghui Li, Zhikun Zhang 2021-09-01 2021-09-01 11 2 10.29012/jpc.775 Differentially private false discovery rate control <p class="p1">Differential privacy provides a rigorous framework for privacy-preserving data analysis. 
This paper proposes the first differentially private procedure for controlling the false discovery rate (FDR) in multiple hypothesis testing. Inspired by the Benjamini-Hochberg procedure (BHq), our approach is to first repeatedly add noise to the logarithms of the p-values to ensure differential privacy and to select an approximately smallest p-value serving as a promising candidate at each iteration; the selected p-values are further supplied to the BHq, and our private procedure releases only the rejected ones. Moreover, we develop a new technique based on a backward submartingale for proving FDR control of a broad class of multiple testing procedures, including our private procedure and both the BHq step-up and step-down procedures. As a novel aspect, the proof works for arbitrary dependence between the true null and false null test statistics, while FDR control is maintained up to a small multiplicative factor.</p> Cynthia Dwork Weijie Su Li Zhang Copyright (c) 2021 Cynthia Dwork, Weijie Su, Li Zhang 2021-09-01 2021-09-01 11 2 10.29012/jpc.755 Editorial: Articles, perspectives, and TPDP <p>The present issue provides a diverse selection of articles. We introduced a new type of article, “Perspectives,” in the previous issue, and continue with two such articles in the current issue, both drawn again from presentations made at the October 2020 Canadian Research Data Centre Network (CRDCN) conference. We also have a new article on the topic of “Privacy Challenges,” as well as the first of several journal versions of contributions to TPDP 2020. 
We open with a regular article on the topic of "Differentially private false discovery rate control."</p> Lars Vilhuber Copyright (c) 2021 Lars Vilhuber 2021-09-01 2021-09-01 11 2 10.29012/jpc.801 Interaction is Necessary for Distributed Learning with Privacy or Communication Constraints <p>Local differential privacy (LDP) is a model in which users send privatized data to an untrusted central server whose goal is to solve some data analysis task. In the non-interactive version of this model, the protocol consists of a single round in which the server sends requests to all users and then receives their responses. This version is deployed in industry because of its practical advantages and has attracted significant research interest.</p> <p>Our main result is an exponential lower bound on the number of samples necessary to solve the standard task of learning a large-margin linear separator in the non-interactive LDP model. Via a standard reduction, this lower bound implies an exponential lower bound for stochastic convex optimization and, specifically, for learning linear models with a convex, Lipschitz, and smooth loss. These results answer the questions posed by Smith, Thakurta, and Upadhyay (IEEE Symposium on Security and Privacy 2017) and by Daniely and Feldman (NeurIPS 2019). Our lower bound relies on a new technique for constructing pairs of distributions with nearly matching moments but whose supports can be nearly separated by a large-margin hyperplane. These lower bounds also hold in the model where communication from each user is limited, and they follow from a lower bound on learning using non-adaptive statistical queries.</p> Yuval Dagan Vitaly Feldman Copyright (c) 2021 Yuval Dagan, Vitaly Feldman 2021-09-01 2021-09-01 11 2 10.29012/jpc.781
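The noisy-selection step described in the Dwork, Su, and Zhang abstract above (add noise to log p-values, then pick an approximately smallest one at each iteration) can be illustrated by a minimal report-noisy-min peeling sketch. This is not the authors' exact algorithm — the noise scale, the naive per-round budget split, and all names here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def private_select_smallest(p_values, k, epsilon, rng=None):
    """Illustrative sketch (not the paper's procedure): repeatedly pick an
    approximately smallest p-value by adding Laplace noise to log p-values
    (report-noisy-min) and removing the winner before the next round."""
    rng = np.random.default_rng() if rng is None else rng
    p = np.asarray(p_values, dtype=float)
    remaining = list(range(len(p)))          # indices still in play
    selected = []
    eps_round = epsilon / k                  # naive per-round budget split
    for _ in range(k):
        logs = np.log(p[remaining])
        # Laplace noise on log p-values; scale 2/eps_round is an assumption
        noisy = logs + rng.laplace(scale=2.0 / eps_round, size=len(remaining))
        j = int(np.argmin(noisy))            # approximately smallest p-value
        selected.append(remaining.pop(j))
    return selected
```

In the full procedure the selected p-values would then be fed to the BHq step, with only the rejected hypotheses released.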
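The non-interactive LDP model discussed in the Dagan and Feldman abstract — a single round in which each user privatizes their own datum locally before sending it — can be sketched with classic binary randomized response. This is a generic illustration of the model, not a construction from the paper; the function names are ours:

```python
import numpy as np

def randomize_bits(bits, epsilon, rng=None):
    """Non-interactive LDP via randomized response: each user independently
    keeps their bit with probability e^eps / (e^eps + 1) and flips it
    otherwise, then sends the single noisy bit (one round, no interaction)."""
    rng = np.random.default_rng() if rng is None else rng
    bits = np.asarray(bits)
    keep = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    flip = rng.random(bits.shape) >= keep
    return np.where(flip, 1 - bits, bits)

def debias_mean(reports, epsilon):
    """Server side: unbiased estimate of the true mean of the users' bits.
    E[report] = (2p - 1) * b + (1 - p), so invert that affine map."""
    p = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return (np.mean(reports) - (1.0 - p)) / (2.0 * p - 1.0)
```

Simple aggregate statistics like this mean are exactly what the single-round model handles well; the paper's point is that harder tasks, such as learning large-margin separators, provably require interaction or exponentially many samples.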