Abstract
Deep neural networks have recently become a highly successful and popular research topic in machine learning, owing to their excellent performance on many benchmark problems and in applications. A key idea in deep learning is to learn not only the nonlinear mapping between input and output vectors but also the underlying structure of the (input) data vectors. In this chapter, we first consider problems encountered when training deep networks using backpropagation-type algorithms. After this, we consider various structures used in deep learning, including restricted Boltzmann machines, deep belief networks, deep Boltzmann machines, and nonlinear autoencoders. In the latter part of the chapter, we discuss in more detail the recently developed neural autoregressive distribution estimator and its variants.
Original language | English (US) |
---|---|
Title of host publication | Advances in Independent Component Analysis and Learning Machines |
Publisher | Elsevier |
Pages | 125-142 |
Number of pages | 18 |
ISBN (Electronic) | 9780128028063 |
DOIs | |
State | Published - Jan 1 2015 |
Keywords
- Autoencoders
- Deep belief networks
- Deep Boltzmann machines
- Deep learning
- Neural autoregressive distribution estimators
- Neural networks
- Restricted Boltzmann machines
- Unsupervised learning
ASJC Scopus subject areas
- Computer Science (all)