TY - GEN

T1 - Learning languages with rational kernels

AU - Cortes, Corinna

AU - Kontorovich, Leonid

AU - Mohri, Mehryar

PY - 2007

Y1 - 2007

N2 - We present a general study of learning and linear separability with rational kernels, the sequence kernels commonly used in computational biology and natural language processing. We give a characterization of the class of all languages linearly separable with rational kernels and prove several properties of the class of languages linearly separable with a fixed rational kernel. In particular, we show that for kernels with transducer values in a finite set, these languages are necessarily finite Boolean combinations of preimages by a transducer of a single sequence. We also analyze the margin properties of linear separation with rational kernels and show that kernels with transducer values in a finite set guarantee a positive margin and lead to better learning guarantees. Creating a rational kernel with values in a finite set is often non-trivial even for relatively simple cases. However, we present a novel and general algorithm, double-tape disambiguation, that takes as input a transducer mapping sequences to sequence features, and yields an associated transducer that defines a finite range rational kernel. We describe the algorithm in detail and show its application to several cases of interest.

UR - http://www.scopus.com/inward/record.url?scp=38049037764&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=38049037764&partnerID=8YFLogxK

U2 - 10.1007/978-3-540-72927-3_26

DO - 10.1007/978-3-540-72927-3_26

M3 - Conference contribution

AN - SCOPUS:38049037764

SN - 9783540729259

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 349

EP - 364

BT - Learning Theory - 20th Annual Conference on Learning Theory, COLT 2007, Proceedings

PB - Springer Verlag

T2 - 20th Annual Conference on Learning Theory, COLT 2007

Y2 - 13 June 2007 through 15 June 2007

ER -