The Sample Complexity of Distribution-Free Parity Learning in the Robust Shuffle Model


Kobbi Nissim
https://orcid.org/0000-0002-6632-8645
Chao Yan
https://orcid.org/0000-0001-6482-6643

Abstract




We provide a lower bound on the sample complexity of distribution-free parity learning in the realizable case in the shuffle model of differential privacy. Namely, we show that the sample complexity of learning d-bit parity functions is Ω(2^{d/2}). Our result extends a recent similar lower bound on the sample complexity of private agnostic learning of parity functions in the shuffle model by Cheu and Ullman. We also sketch a simple shuffle model protocol demonstrating that our results are tight up to poly(d) factors.
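For context, a d-bit parity function χ_S maps a vector x ∈ {0,1}^d to the XOR of the bits of x indexed by a subset S ⊆ {1, …, d}; since there are 2^d such subsets, the concept class has size 2^d. A minimal sketch (illustrative only, not taken from the paper):

```python
def parity(S, x):
    """Evaluate the parity function chi_S on bit vector x:
    the XOR (sum mod 2) of the bits of x indexed by the set S."""
    return sum(x[i] for i in S) % 2

# Example with d = 4 and S = {0, 2}:
x = [1, 0, 1, 1]
print(parity({0, 2}, x))  # x[0] XOR x[2] = 1 XOR 1 = 0
```

A realizable-case learner receives labeled examples (x, parity(S, x)) for some unknown S and must recover a hypothesis consistent with that parity.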




Article Details

How to Cite
Nissim, Kobbi, and Chao Yan. 2022. “The Sample Complexity of Distribution-Free Parity Learning in the Robust Shuffle Model”. Journal of Privacy and Confidentiality 12 (2). https://doi.org/10.29012/jpc.805.
Section
TPDP 2021