TY - GEN
T1 - More data means less inference
T2 - 24th Annual Conference on Neural Information Processing Systems 2010, NIPS 2010
AU - Sontag, David
AU - Meshi, Ofer
AU - Jaakkola, Tommi
AU - Globerson, Amir
PY - 2010
Y1 - 2010
AB - The problem of learning to predict structured labels is of key importance in many applications. However, for general graph structure both learning and inference are intractable. Here we show that it is possible to circumvent this difficulty when the distribution of training examples is rich enough, via a method similar in spirit to pseudo-likelihood. We show that our new method achieves consistency, and illustrate empirically that it indeed approaches the performance of exact methods when sufficiently large training sets are used.
UR - http://www.scopus.com/inward/record.url?scp=84860649508&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84860649508&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:84860649508
SN - 9781617823800
T3 - Advances in Neural Information Processing Systems 23: 24th Annual Conference on Neural Information Processing Systems 2010, NIPS 2010
BT - Advances in Neural Information Processing Systems 23
Y2 - 6 December 2010 through 9 December 2010
ER -