TY - GEN

T1 - Continuous LWE

AU - Bruna, Joan

AU - Regev, Oded

AU - Song, Min Jae

AU - Tang, Yi

N1 - Funding Information:
∗This work is partially supported by the Alfred P. Sloan Foundation, NSF RI-1816753, NSF CAREER CIF 1845360, and the Institute for Advanced Study. †Research supported by the Simons Collaboration on Algorithms and Geometry, a Simons Investigator Award, and by the National Science Foundation (NSF) under Grant No. CCF-1814524. ‡Research supported by the National Science Foundation (NSF) under Grant No. CCF-1814524. §This work was done while the author was at the Courant Institute, New York University.
Publisher Copyright:
© 2021 ACM.

PY - 2021/6/15

Y1 - 2021/6/15

N2 - We introduce a continuous analogue of the Learning with Errors (LWE) problem, which we name CLWE. We give a polynomial-time quantum reduction from worst-case lattice problems to CLWE, showing that CLWE enjoys similar hardness guarantees to those of LWE. Alternatively, our result can also be seen as opening new avenues of (quantum) attacks on lattice problems. Our work resolves an open problem regarding the computational complexity of learning mixtures of Gaussians without separability assumptions (Diakonikolas 2016, Moitra 2018). As an additional motivation, (a slight variant of) CLWE was considered in the context of robust machine learning (Diakonikolas et al., FOCS 2017), where hardness in the statistical query (SQ) model was shown; our work addresses the open question regarding its computational hardness (Bubeck et al., ICML 2019).

AB - We introduce a continuous analogue of the Learning with Errors (LWE) problem, which we name CLWE. We give a polynomial-time quantum reduction from worst-case lattice problems to CLWE, showing that CLWE enjoys similar hardness guarantees to those of LWE. Alternatively, our result can also be seen as opening new avenues of (quantum) attacks on lattice problems. Our work resolves an open problem regarding the computational complexity of learning mixtures of Gaussians without separability assumptions (Diakonikolas 2016, Moitra 2018). As an additional motivation, (a slight variant of) CLWE was considered in the context of robust machine learning (Diakonikolas et al., FOCS 2017), where hardness in the statistical query (SQ) model was shown; our work addresses the open question regarding its computational hardness (Bubeck et al., ICML 2019).

KW - computational learning theory

KW - cryptography

KW - lattices

KW - Learning with Errors

KW - mixture of Gaussians

KW - quantum computing

UR - http://www.scopus.com/inward/record.url?scp=85108157293&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85108157293&partnerID=8YFLogxK

U2 - 10.1145/3406325.3451000

DO - 10.1145/3406325.3451000

M3 - Conference contribution

AN - SCOPUS:85108157293

T3 - Proceedings of the Annual ACM Symposium on Theory of Computing

SP - 694

EP - 707

BT - STOC 2021 - Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing

A2 - Khuller, Samir

A2 - Williams, Virginia Vassilevska

PB - Association for Computing Machinery

T2 - 53rd Annual ACM SIGACT Symposium on Theory of Computing, STOC 2021

Y2 - 21 June 2021 through 25 June 2021

ER -