TY - GEN
T1 - What is the best multi-stage architecture for object recognition?
AU - Jarrett, Kevin
AU - Kavukcuoglu, Koray
AU - Ranzato, Marc'Aurelio
AU - LeCun, Yann
N1 - Copyright:
Copyright 2010 Elsevier B.V., All rights reserved.
PY - 2009
Y1 - 2009
N2 - In many recent object recognition systems, feature extraction stages are generally composed of a filter bank, a non-linear transformation, and some sort of feature pooling layer. Most systems use only one stage of feature extraction in which the filters are hard-wired, or two stages where the filters in one or both stages are learned in supervised or unsupervised mode. This paper addresses three questions: 1. How do the non-linearities that follow the filter banks influence recognition accuracy? 2. Does learning the filter banks in an unsupervised or supervised manner improve performance over random filters or hard-wired filters? 3. Is there any advantage to using an architecture with two stages of feature extraction, rather than one? We show that using non-linearities that include rectification and local contrast normalization is the single most important ingredient for good accuracy on object recognition benchmarks. We show that two stages of feature extraction yield better accuracy than one. Most surprisingly, we show that a two-stage system with random filters can yield a recognition rate of almost 63% on Caltech-101, provided that the proper non-linearities and pooling layers are used. Finally, we show that with supervised refinement the system achieves state-of-the-art performance on the NORB dataset (5.6%), and that unsupervised pre-training followed by supervised refinement produces good accuracy on Caltech-101 (> 65%) and the lowest known error rate on the undistorted, unprocessed MNIST dataset (0.53%).
AB - In many recent object recognition systems, feature extraction stages are generally composed of a filter bank, a non-linear transformation, and some sort of feature pooling layer. Most systems use only one stage of feature extraction in which the filters are hard-wired, or two stages where the filters in one or both stages are learned in supervised or unsupervised mode. This paper addresses three questions: 1. How do the non-linearities that follow the filter banks influence recognition accuracy? 2. Does learning the filter banks in an unsupervised or supervised manner improve performance over random filters or hard-wired filters? 3. Is there any advantage to using an architecture with two stages of feature extraction, rather than one? We show that using non-linearities that include rectification and local contrast normalization is the single most important ingredient for good accuracy on object recognition benchmarks. We show that two stages of feature extraction yield better accuracy than one. Most surprisingly, we show that a two-stage system with random filters can yield a recognition rate of almost 63% on Caltech-101, provided that the proper non-linearities and pooling layers are used. Finally, we show that with supervised refinement the system achieves state-of-the-art performance on the NORB dataset (5.6%), and that unsupervised pre-training followed by supervised refinement produces good accuracy on Caltech-101 (> 65%) and the lowest known error rate on the undistorted, unprocessed MNIST dataset (0.53%).
UR - http://www.scopus.com/inward/record.url?scp=77953183471&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=77953183471&partnerID=8YFLogxK
U2 - 10.1109/ICCV.2009.5459469
DO - 10.1109/ICCV.2009.5459469
M3 - Conference contribution
AN - SCOPUS:77953183471
SN - 9781424444205
T3 - Proceedings of the IEEE International Conference on Computer Vision
SP - 2146
EP - 2153
BT - 2009 IEEE 12th International Conference on Computer Vision, ICCV 2009
T2 - 12th International Conference on Computer Vision, ICCV 2009
Y2 - 29 September 2009 through 2 October 2009
ER -