TY - GEN

T1 - Learning to be Bayesian without Supervision

AU - Raphan, Martin

AU - Simoncelli, Eero P.

N1 - Funding Information:
This work was partially funded by the Howard Hughes Medical Institute, and by New York University through a McCracken Fellowship to MR.
Publisher Copyright:
© NIPS 2006. All rights reserved.

PY - 2006

Y1 - 2006

N2 - Bayesian estimators are defined in terms of the posterior distribution. Typically, this is written as the product of the likelihood function and a prior probability density, both of which are assumed to be known. But in many situations, the prior density is not known, and is difficult to learn from data since one does not have access to uncorrupted samples of the variable being estimated. We show that for a wide variety of observation models, the Bayes least squares (BLS) estimator may be formulated without explicit reference to the prior. Specifically, we derive a direct expression for the estimator, and a related expression for the mean squared estimation error, both in terms of the density of the observed measurements. Each of these prior-free formulations allows us to approximate the estimator given a sufficient amount of observed data. We use the first form to develop practical nonparametric approximations of BLS estimators for several different observation processes, and the second form to develop a parametric family of estimators for use in the additive Gaussian noise case. We examine the empirical performance of these estimators as a function of the amount of observed data.

AB - Bayesian estimators are defined in terms of the posterior distribution. Typically, this is written as the product of the likelihood function and a prior probability density, both of which are assumed to be known. But in many situations, the prior density is not known, and is difficult to learn from data since one does not have access to uncorrupted samples of the variable being estimated. We show that for a wide variety of observation models, the Bayes least squares (BLS) estimator may be formulated without explicit reference to the prior. Specifically, we derive a direct expression for the estimator, and a related expression for the mean squared estimation error, both in terms of the density of the observed measurements. Each of these prior-free formulations allows us to approximate the estimator given a sufficient amount of observed data. We use the first form to develop practical nonparametric approximations of BLS estimators for several different observation processes, and the second form to develop a parametric family of estimators for use in the additive Gaussian noise case. We examine the empirical performance of these estimators as a function of the amount of observed data.

UR - http://www.scopus.com/inward/record.url?scp=85148982877&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85148982877&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:85148982877

T3 - NIPS 2006: Proceedings of the 19th International Conference on Neural Information Processing Systems

SP - 1145

EP - 1152

BT - NIPS 2006

A2 - Schölkopf, Bernhard

A2 - Platt, John C.

A2 - Hofmann, Thomas

PB - MIT Press

T2 - 19th International Conference on Neural Information Processing Systems, NIPS 2006

Y2 - 4 December 2006 through 7 December 2006

ER -