Abstract
We consider the problem of attribute-efficient learning in query and mistake-bound models. Attribute-efficient algorithms make a number of queries or mistakes that is polynomial in the number of relevant variables in the target function, but only sublinear in the number of irrelevant variables. We consider a variant of the membership query model in which the learning algorithm is given as input the number of relevant variables of the target function. Using a number-theoretic coloring technique, we show that in this model, any class of functions (including parity) that can be learned in polynomial time can be learned attribute-efficiently in polynomial time. We show that this does not hold in the randomized membership query model. In the mistake-bound model, we consider the problem of learning attribute-efficiently using hypotheses that are formulas of small depth. Our results extend the work of Blum et al. and Bshouty et al.
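For intuition about what the abstract's attribute-efficiency claim is improving on, here is a minimal sketch (not the paper's algorithm) of the standard membership-query learner for parity functions: query the all-zeros vector and each unit vector, so the variables whose unit-vector query flips the output are exactly the relevant ones. This uses n + 1 queries, which is linear in the total number of variables n rather than sublinear in the number of irrelevant ones, and is therefore not attribute-efficient. The function names and the example target are illustrative assumptions.

```python
def learn_parity_with_mqs(oracle, n):
    """Baseline membership-query learner for a parity over {0,1}^n.

    Uses n + 1 queries: polynomial in n, but NOT attribute-efficient,
    since the query count grows linearly with the number of irrelevant
    variables instead of sublinearly.
    """
    b = oracle([0] * n)  # constant term f(0...0)
    relevant = []
    for i in range(n):
        e_i = [0] * n
        e_i[i] = 1
        if oracle(e_i) != b:  # flipping x_i changes the output => relevant
            relevant.append(i)
    return b, relevant

# Hypothetical example: hidden parity over variables {2, 5} among n = 8.
def make_parity(relevant_set, const=0):
    return lambda x: const ^ (sum(x[i] for i in relevant_set) % 2)

oracle = make_parity({2, 5})
print(learn_parity_with_mqs(oracle, 8))  # → (0, [2, 5])
```

The paper's point is that, when the learner is told the number of relevant variables in advance, a number-theoretic coloring technique lets classes like parity be learned with a query count sublinear in n.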
Original language | English (US)
---|---
Title of host publication | Proceedings of the Annual ACM Conference on Computational Learning Theory
Editors | Anon
Pages | 235-243
Number of pages | 9
State | Published - 1996
Event | Proceedings of the 1996 9th Annual Conference on Computational Learning Theory - Desenzano del Garda, Italy. Duration: Jun 28 1996 → Jul 1 1996
Other

Other | Proceedings of the 1996 9th Annual Conference on Computational Learning Theory
---|---
City | Desenzano del Garda, Italy
Period | 6/28/96 → 7/1/96
ASJC Scopus subject areas
- Computational Mathematics