Why skewing works: Learning difficult Boolean functions with greedy tree learners

Bernard Rosell, Lisa Hellerstein, Soumya Ray, David Page

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Abstract

    We analyze skewing, an approach that has been empirically observed to enable greedy decision tree learners to learn "difficult" Boolean functions, such as parity, in the presence of irrelevant variables. We prove that, in an idealized setting, for any function and choice of skew parameters, skewing finds relevant variables with probability 1. We present experiments exploring how different parameter choices affect the success of skewing in empirical settings. Finally, we analyze a variant of skewing called Sequential Skewing.
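The intuition behind skewing can be illustrated with a small sketch (an illustrative toy example, not the paper's algorithm or experiments): for a parity target, ordinary information gain is zero for every variable, so a greedy tree learner cannot distinguish relevant from irrelevant variables. Skewing reweights each example by how well it matches a chosen "preferred setting" of the variables, and under the reweighted distribution the relevant variables acquire nonzero gain. The four-variable XOR target, the all-ones preferred setting, and the skew parameter p = 0.75 below are arbitrary choices for illustration.

```python
from itertools import product
from math import log2

def H(p):
    """Binary entropy of a probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def gain(examples, var, skew=None):
    """Weighted information gain of splitting on variable `var`.

    `examples` is a list of (x, y) pairs. If skew = (preferred, p),
    each example is weighted by p for every variable matching the
    preferred setting and by (1 - p) for every mismatch (a toy
    version of the skewed distribution)."""
    def w(x):
        if skew is None:
            return 1.0
        pref, p = skew
        weight = 1.0
        for xi, vi in zip(x, pref):
            weight *= p if xi == vi else 1 - p
        return weight

    total = sum(w(x) for x, _ in examples)
    pos = sum(w(x) for x, y in examples if y)
    g = H(pos / total)
    for b in (0, 1):
        side = [(x, y) for x, y in examples if x[var] == b]
        ws = sum(w(x) for x, _ in side)
        wp = sum(w(x) for x, y in side if y)
        g -= (ws / total) * H(wp / ws)
    return g

# Target: parity of x0 and x1; x2 and x3 are irrelevant.
n = 4
data = [(x, x[0] ^ x[1]) for x in product((0, 1), repeat=n)]

plain = [gain(data, i) for i in range(n)]                       # all zero
skewed = [gain(data, i, skew=((1, 1, 1, 1), 0.75)) for i in range(n)]
```

Under the uniform distribution every gain in `plain` is zero, while in `skewed` only the two relevant variables show positive gain; the paper's result is that, in an idealized setting, a random choice of preferred setting exposes the relevant variables this way with probability 1.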

    Original language: English (US)
    Title of host publication: ICML 2005 - Proceedings of the 22nd International Conference on Machine Learning
    Editors: L. Raedt, S. Wrobel
    Pages: 729-736
    Number of pages: 8
    State: Published - 2005
    Event: ICML 2005: 22nd International Conference on Machine Learning - Bonn, Germany
    Duration: Aug 7, 2005 - Aug 11, 2005

    Publication series

    Name: ICML 2005 - Proceedings of the 22nd International Conference on Machine Learning

    Other

    Other: ICML 2005: 22nd International Conference on Machine Learning
    Country: Germany
    City: Bonn
    Period: 8/7/05 - 8/11/05

    ASJC Scopus subject areas

    • Engineering (all)


    Cite this

    Rosell, B., Hellerstein, L., Ray, S., & Page, D. (2005). Why skewing works: Learning difficult Boolean functions with greedy tree learners. In L. Raedt & S. Wrobel (Eds.), ICML 2005 - Proceedings of the 22nd International Conference on Machine Learning (pp. 729-736). (ICML 2005 - Proceedings of the 22nd International Conference on Machine Learning).