TY - JOUR
T1 - Open Problem: Better Differentially Private Learning Algorithms with Margin Guarantees
T2 - 35th Conference on Learning Theory, COLT 2022
AU - Bassily, Raef
AU - Mohri, Mehryar
AU - Suresh, Ananda Theertha
N1 - Funding Information:
This research was done while RB was visiting Google, NY. RB’s research at OSU is supported by NSF Award AF-1908281, NSF Award 2112471, and NSF CAREER Award 2144532.
Publisher Copyright:
© 2022 R. Bassily, M. Mohri & A.T. Suresh.
PY - 2022
Y1 - 2022
AB - The design of efficient differentially private (DP) learning algorithms with dimension-independent learning guarantees has been one of the central challenges in privacy-preserving machine learning. Existing algorithms suffer from weak generalization guarantees, restrictive model assumptions, or high computational cost. In non-private learning, dimension-independent generalization guarantees based on the notion of confidence margin have proven to be among the most informative and useful learning guarantees. This motivates a systematic study of DP learning algorithms with confidence-margin generalization guarantees. Recent work has begun exploring this direction in the context of linear and kernel-based classification, as well as certain classes of neural networks (NNs). Despite several positive results, a number of fundamental questions remain open. We identify two major open problems related to DP margin-based learning algorithms. The first concerns the design of algorithms with more favorable computational cost. The second pertains to achieving margin guarantees for NNs under DP with no explicit dependence on the network size.
UR - http://www.scopus.com/inward/record.url?scp=85164725980&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85164725980&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85164725980
SN - 2640-3498
VL - 178
SP - 5638
EP - 5643
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
Y2 - 2 July 2022 through 5 July 2022
ER -