Entropy bounds on Bayesian learning

Olivier Gossner, Tristan Tomala

Research output: Contribution to journal › Article › peer-review

Abstract

An observer of a process (xt) believes the process is governed by Q, whereas the true law is P. We bound the expected average distance between P(xt | x1, ..., xt−1) and Q(xt | x1, ..., xt−1) for t = 1, ..., n by a function of the relative entropy between the marginals of P and Q on the first n realizations. We apply this bound to the cost of learning in sequential decision problems and to the merging of Q to P.
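The machinery behind bounds of this type can be illustrated numerically. The sketch below is not the paper's bound itself; it is a minimal example, under an assumed i.i.d. Bernoulli setup, of the two standard ingredients such bounds combine: the chain rule of relative entropy (the n-step relative entropy decomposes into expected one-step terms) and Pinsker's inequality (one-step distance is at most the square root of half the one-step relative entropy). The function names and parameter values are illustrative assumptions.

```python
import math

def kl_bernoulli(p, q):
    """Relative entropy D(Bernoulli(p) || Bernoulli(q)), in nats."""
    d = 0.0
    if p > 0:
        d += p * math.log(p / q)
    if p < 1:
        d += (1 - p) * math.log((1 - p) / (1 - q))
    return d

def tv_bernoulli(p, q):
    """Total variation distance between Bernoulli(p) and Bernoulli(q)."""
    return abs(p - q)

# Assumed example: the true law P draws each x_t i.i.d. Bernoulli(0.6),
# while the observer's belief Q uses Bernoulli(0.5).
p, q, n = 0.6, 0.5, 20

# Chain rule: for i.i.d. laws, the relative entropy between the marginals
# of P and Q on the first n realizations is n times the one-step KL.
kl_n = n * kl_bernoulli(p, q)

# Pinsker's inequality: the per-step total variation distance is bounded
# by sqrt(KL/2), so the average one-step distance over t = 1, ..., n is
# controlled by the n-step relative entropy divided by n.
avg_tv = tv_bernoulli(p, q)
bound = math.sqrt(kl_n / (2 * n))
assert avg_tv <= bound
```

With these numbers the average one-step distance is 0.1 and the entropy-based bound is about 0.1003, so the inequality is nearly tight in this example.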

Original language: English (US)
Pages (from-to): 24-32
Number of pages: 9
Journal: Journal of Mathematical Economics
Volume: 44
Issue number: 1
DOIs
State: Published - Jan 1 2008

Keywords

  • Bayesian learning
  • Entropy
  • Repeated decision problem
  • Value of information

ASJC Scopus subject areas

  • Economics and Econometrics
  • Applied Mathematics
