TY - GEN
T1 - New results for learning noisy parities and halfspaces
AU - Feldman, Vitaly
AU - Gopalan, Parikshit
AU - Khot, Subhash
AU - Ponnuswami, Ashok Kumar
PY - 2006
Y1 - 2006
N2 - We address well-studied problems concerning the learnability of parities and halfspaces in the presence of classification noise. Learning of parities under the uniform distribution with random classification noise, also called the noisy parity problem, is a famous open problem in computational learning. We reduce a number of basic problems regarding learning under the uniform distribution to learning of noisy parities. We show that under the uniform distribution, learning parities with adversarial classification noise reduces to learning parities with random classification noise. Together with the parity learning algorithm of Blum et al. [5], this gives the first nontrivial algorithm for learning parities with adversarial noise. We show that learning of DNF expressions reduces to learning noisy parities of just a logarithmic number of variables. We show that learning of k-juntas reduces to learning noisy parities of k variables. These reductions work even in the presence of random classification noise in the original DNF or junta. We then consider the problem of learning halfspaces over ℚⁿ with adversarial noise, or equivalently, finding a halfspace that maximizes the agreement rate with a given set of examples. We prove an essentially optimal hardness factor of 2 - ε, improving the factor of 85/84 - ε due to Bshouty and Burroughs [8]. Finally, we show that majorities of halfspaces are hard to PAC-learn using any representation, based on the cryptographic assumption underlying the Ajtai-Dwork cryptosystem.
AB - We address well-studied problems concerning the learnability of parities and halfspaces in the presence of classification noise. Learning of parities under the uniform distribution with random classification noise, also called the noisy parity problem, is a famous open problem in computational learning. We reduce a number of basic problems regarding learning under the uniform distribution to learning of noisy parities. We show that under the uniform distribution, learning parities with adversarial classification noise reduces to learning parities with random classification noise. Together with the parity learning algorithm of Blum et al. [5], this gives the first nontrivial algorithm for learning parities with adversarial noise. We show that learning of DNF expressions reduces to learning noisy parities of just a logarithmic number of variables. We show that learning of k-juntas reduces to learning noisy parities of k variables. These reductions work even in the presence of random classification noise in the original DNF or junta. We then consider the problem of learning halfspaces over ℚⁿ with adversarial noise, or equivalently, finding a halfspace that maximizes the agreement rate with a given set of examples. We prove an essentially optimal hardness factor of 2 - ε, improving the factor of 85/84 - ε due to Bshouty and Burroughs [8]. Finally, we show that majorities of halfspaces are hard to PAC-learn using any representation, based on the cryptographic assumption underlying the Ajtai-Dwork cryptosystem.
UR - http://www.scopus.com/inward/record.url?scp=34547698378&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=34547698378&partnerID=8YFLogxK
U2 - 10.1109/FOCS.2006.51
DO - 10.1109/FOCS.2006.51
M3 - Conference contribution
AN - SCOPUS:34547698378
SN - 0769527205
SN - 9780769527208
T3 - Proceedings - Annual IEEE Symposium on Foundations of Computer Science, FOCS
SP - 563
EP - 572
BT - 47th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2006
T2 - 47th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2006
Y2 - 21 October 2006 through 24 October 2006
ER -