Private Stochastic Optimization with Large Worst-Case Lipschitz Parameter


Andrew Lowy
https://orcid.org/0009-0001-9893-5669
Meisam Razaviyayn

Abstract

We study differentially private (DP) stochastic optimization (SO) with loss functions whose worst-case Lipschitz parameter over all data points may be huge or infinite. To date, most work on DP SO has assumed that the loss is uniformly Lipschitz continuous over data (i.e., stochastic gradients are uniformly bounded over all data points). While this assumption is convenient, it often leads to pessimistic excess risk bounds: in practical problems, the worst-case Lipschitz parameter of the loss may be huge due to outliers and/or heavy-tailed data, and in such cases the error bounds for DP SO, which scale with this parameter, are vacuous. To address these limitations, this work provides improved excess risk bounds that do not depend on the worst-case Lipschitz parameter of the loss. Building on a recent line of work (Wang et al., 2020; Kamath et al., 2022), we assume instead that stochastic gradients have bounded $k$-th order moments for some $k \geq 2$. Compared with works on uniformly Lipschitz DP SO, our excess risk scales with the $k$-th moment bound rather than the worst-case Lipschitz parameter of the loss, allowing for significantly faster rates in the presence of outliers and/or heavy-tailed data.


First, for smooth convex loss functions, we provide linear-time algorithms with state-of-the-art excess risk. We complement our excess risk upper bounds with novel lower bounds; in certain parameter regimes, our linear-time excess risk bounds are minimax optimal. Second, we provide the first algorithm to handle non-smooth convex loss functions. To do so, we develop novel algorithmic and stability-based proof techniques, which we believe will be useful for future work in obtaining optimal excess risk. Finally, our work is the first to address non-convex, non-uniformly Lipschitz loss functions satisfying the Proximal-PL inequality; our Proximal-PL algorithm has near-optimal excess risk.
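The setting above is commonly approached with noisy clipped stochastic gradient methods: clipping bounds the sensitivity of each update even when per-sample gradients are heavy-tailed, so calibrated Gaussian noise yields a DP guarantee. The sketch below is an illustration of this general technique, not the authors' exact algorithm; the function name, step counts, and the per-step Gaussian-mechanism noise calibration are illustrative assumptions (composition across iterations is ignored here).

```python
import numpy as np

def noisy_clipped_sgd(grad_fn, data, w0, steps, lr, clip, eps, delta, rng):
    """Illustrative DP-SGD with per-sample gradient clipping.

    grad_fn(w, x) returns the stochastic gradient at point w for sample x.
    Clipping bounds each gradient's norm by `clip`, so Gaussian noise with
    sigma = clip * sqrt(2 * ln(1.25/delta)) / eps gives (eps, delta)-DP
    for a single step (basic Gaussian mechanism); a full analysis would
    account for composition over all `steps` iterations.
    """
    w = np.array(w0, dtype=float)
    sigma = clip * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    for _ in range(steps):
        x = data[rng.integers(len(data))]          # sample one data point
        g = grad_fn(w, x)
        norm = np.linalg.norm(g)
        if norm > clip:                            # shrink heavy-tailed gradients
            g = g * (clip / norm)
        w -= lr * (g + rng.normal(0.0, sigma, size=w.shape))
    return w

# Toy usage: private mean estimation via least squares on heavy-tailed data.
rng = np.random.default_rng(0)
data = rng.standard_t(df=3, size=(500, 2))         # heavy-tailed samples
grad = lambda w, x: w - x                          # gradient of 0.5 * ||w - x||^2
w_hat = noisy_clipped_sgd(grad, data, w0=[0.0, 0.0], steps=2000,
                          lr=0.05, clip=1.0, eps=1.0, delta=1e-5, rng=rng)
```

Note that the clip threshold trades bias (aggressive clipping distorts the average gradient) against noise (a larger threshold forces a larger sigma); the paper's moment-bound assumption is precisely what lets this trade-off be tuned without reference to a worst-case Lipschitz constant.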

Article Details

How to Cite
Lowy, Andrew, and Meisam Razaviyayn. 2025. “Private Stochastic Optimization With Large Worst-Case Lipschitz Parameter”. Journal of Privacy and Confidentiality 15 (1). https://doi.org/10.29012/jpc.909.
Section
TPDP 2023
Author Biography

Meisam Razaviyayn, University of Southern California

Associate Professor and Andrew and Erna Viterbi Early Career Chair, Departments of Industrial and Systems Engineering, Electrical and Computer Engineering, Quantitative and Computational Biology, and Computer Science. Associate Director of the USC-Meta Center for Research and Education in AI and Learning.
