TY - GEN
T1 - Energy-based models in document recognition and computer vision
AU - LeCun, Yann
AU - Chopra, Sumit
AU - Ranzato, Marc'Aurelio
AU - Huang, Fu Jie
PY - 2007
AB - The Machine Learning and Pattern Recognition communities are facing two challenges: solving the normalization problem, and solving the deep learning problem. The normalization problem is related to the difficulty of training probabilistic models over large spaces while keeping them properly normalized. In recent years, the ML and Natural Language communities have devoted considerable effort to circumventing this problem by developing "unnormalized" learning models for tasks in which the output is highly structured (e.g. English sentences). This class of models was in fact originally developed during the 1990s in the handwriting recognition community, and includes Graph Transformer Networks, Conditional Random Fields, Hidden Markov SVMs, and Maximum Margin Markov Networks. We describe these models within the unifying framework of "Energy-Based Models" (EBM). The deep learning problem is related to the issue of training all the levels of a recognition system (e.g. segmentation, feature extraction, recognition, etc.) in an integrated fashion. We first consider "traditional" methods for deep learning, such as convolutional networks and back-propagation, and show that, although they produce very low error rates for handwriting and object recognition, they require many training samples. We show that using unsupervised learning to initialize the layers of a deep network dramatically reduces the required number of training samples, particularly for such tasks as the recognition of everyday objects at the category level.
UR - http://www.scopus.com/inward/record.url?scp=51249093914&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=51249093914&partnerID=8YFLogxK
DO - 10.1109/ICDAR.2007.4378728
M3 - Conference contribution
AN - SCOPUS:51249093914
SN - 0769528228
SN - 9780769528229
T3 - Proceedings of the International Conference on Document Analysis and Recognition, ICDAR
SP - 337
EP - 341
BT - Proceedings - 9th International Conference on Document Analysis and Recognition, ICDAR 2007
T2 - 9th International Conference on Document Analysis and Recognition, ICDAR 2007
Y2 - 23 September 2007 through 26 September 2007
ER -