Abstract
We introduce a novel framework for classification with a rejection option that consists of simultaneously learning two functions: a classifier along with a rejection function. We present a full theoretical analysis of this framework including new data-dependent learning bounds in terms of the Rademacher complexities of the classifier and rejection families as well as consistency and calibration results. These theoretical guarantees guide us in designing new algorithms that can exploit different kernel-based hypothesis sets for the classifier and rejection functions. We compare our general framework with the special case of confidence-based rejection for which we also devise alternative loss functions and algorithms. We report the results of several experiments showing that our kernel-based algorithms can yield a notable improvement over the best existing confidence-based rejection algorithm.
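The abstract contrasts the paper's joint classifier/rejection framework with the confidence-based special case, in which a single classifier's score is thresholded and low-confidence points are abstained on. The following is a minimal illustrative sketch of that confidence-based baseline only, not the paper's kernel-based joint-learning algorithm; the function name, the score array, and the threshold `theta` are hypothetical choices for illustration.

```python
import numpy as np

def predict_with_rejection(h_scores, theta):
    """Confidence-based rejection (illustrative sketch):
    predict sign(h(x)) when |h(x)| >= theta, otherwise
    abstain (encoded here as 0)."""
    return np.where(np.abs(h_scores) >= theta,
                    np.sign(h_scores),
                    0.0)

# Hypothetical classifier scores h(x) on four test points.
scores = np.array([1.3, -0.2, 0.05, -2.1])
print(predict_with_rejection(scores, theta=0.5))
# -> [ 1.  0.  0. -1.]
```

The paper's more general framework instead learns the rejection function jointly with the classifier, so the abstention region need not be a symmetric band around the decision boundary.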
| Field | Value |
|---|---|
| Original language | English (US) |
| Pages (from-to) | 277-315 |
| Number of pages | 39 |
| Journal | Annals of Mathematics and Artificial Intelligence |
| Volume | 92 |
| Issue number | 2 |
| DOIs | |
| State | Published - Apr 2024 |
Keywords
- Abstention
- Confidence-based models
- Kernels
- Rejection
ASJC Scopus subject areas
- Applied Mathematics
- Artificial Intelligence