TY - JOUR
T1 - Sparse Coding with Multi-layer Decoders using Variance Regularization
AU - Evtimova, Katrina
AU - LeCun, Yann
N1 - Publisher Copyright:
© 2022, Transactions on Machine Learning Research. All rights reserved.
PY - 2022/8
Y1 - 2022/8
N2 - Sparse representations of images are useful in many computer vision applications. Sparse coding with an l1 penalty and a learned linear dictionary requires regularization of the dictionary to prevent a collapse in the l1 norms of the codes. Typically, this regularization entails bounding the Euclidean norms of the dictionary’s elements. In this work, we propose a novel sparse coding protocol which prevents a collapse in the codes without the need to regularize the decoder. Our method regularizes the codes directly so that each latent code component has variance greater than a fixed threshold over a set of sparse representations for a given set of inputs. Furthermore, we explore ways to effectively train sparse coding systems with multi-layer decoders since they can model more complex relationships than linear dictionaries. In our experiments with MNIST and natural image patches, we show that decoders learned with our approach have interpretable features in both the linear and multi-layer cases. Moreover, we show that sparse autoencoders with multi-layer decoders trained using our variance regularization method produce higher quality reconstructions with sparser representations when compared to autoencoders with linear dictionaries. Additionally, sparse representations obtained with our variance regularization approach are useful in the downstream tasks of denoising and classification in the low-data regime.
UR - http://www.scopus.com/inward/record.url?scp=105000197504&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=105000197504&partnerID=8YFLogxK
M3 - Article
AN - SCOPUS:105000197504
SN - 2835-8856
VL - 2022 August
JO - Transactions on Machine Learning Research
JF - Transactions on Machine Learning Research
ER -
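
Note appended to this record: the abstract describes keeping the variance of each latent code component above a fixed threshold instead of regularizing the decoder. A minimal, illustrative sketch of that idea (not the paper's code; the hinge form, names, and default values below are assumptions) might look like this:

# Illustrative sketch only: a hinge penalty pushing the per-component
# spread of a batch of sparse codes Z (shape: batch x code_dim) above a
# threshold T, discouraging a collapse in the l1 norms of the codes.
import numpy as np

def variance_hinge_penalty(Z: np.ndarray, T: float = 1.0, eps: float = 1e-8) -> float:
    # Per-component standard deviation computed over the batch of codes.
    std = np.sqrt(Z.var(axis=0) + eps)
    # Penalize only components whose spread falls below the threshold T.
    return float(np.mean(np.maximum(0.0, T - std)))

# Hypothetical usage inside a sparse-coding training loop:
# loss = reconstruction_error \
#        + lmbda * np.abs(Z).sum(axis=1).mean() \
#        + beta * variance_hinge_penalty(Z)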