TY - CONF
T1 - Rényi Entropy Bounds on the Active Learning Cost-Performance Tradeoff
AU - Jamali, Vahid
AU - Tulino, Antonia
AU - Llorca, Jaime
AU - Erkip, Elza
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/6
Y1 - 2020/6
N2 - Semi-supervised classification, one of the most prominent fields in machine learning, studies how to combine the statistical knowledge of the often abundant unlabeled data with the often limited labeled data in order to maximize overall classification accuracy. In this context, the process of actively choosing the data to be labeled is referred to as active learning. In this paper, we initiate the non-asymptotic analysis of the optimal policy for semi-supervised classification with actively obtained labeled data. Considering a general Bayesian classification model, we provide the first characterization of the jointly optimal active learning and semi-supervised classification policy, in terms of the cost-performance tradeoff driven by the label query budget (number of data items to be labeled) and overall classification accuracy. Leveraging recent results on the Rényi entropy, we derive tight information-theoretic bounds on this active learning cost-performance tradeoff.
AB - Semi-supervised classification, one of the most prominent fields in machine learning, studies how to combine the statistical knowledge of the often abundant unlabeled data with the often limited labeled data in order to maximize overall classification accuracy. In this context, the process of actively choosing the data to be labeled is referred to as active learning. In this paper, we initiate the non-asymptotic analysis of the optimal policy for semi-supervised classification with actively obtained labeled data. Considering a general Bayesian classification model, we provide the first characterization of the jointly optimal active learning and semi-supervised classification policy, in terms of the cost-performance tradeoff driven by the label query budget (number of data items to be labeled) and overall classification accuracy. Leveraging recent results on the Rényi entropy, we derive tight information-theoretic bounds on this active learning cost-performance tradeoff.
UR - http://www.scopus.com/inward/record.url?scp=85090406077&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85090406077&partnerID=8YFLogxK
U2 - 10.1109/ISIT44484.2020.9174162
DO - 10.1109/ISIT44484.2020.9174162
M3 - Conference contribution
AN - SCOPUS:85090406077
T3 - IEEE International Symposium on Information Theory - Proceedings
SP - 2807
EP - 2812
BT - 2020 IEEE International Symposium on Information Theory, ISIT 2020 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2020 IEEE International Symposium on Information Theory, ISIT 2020
Y2 - 21 July 2020 through 26 July 2020
ER -