Unsupervised deep learning: A short review

Juha Karhunen, Tapani Raiko, Kyung Hyun Cho

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

Deep neural networks with several layers have recently become a highly successful and popular research topic in machine learning due to their excellent performance in many benchmark problems and applications. A key idea in deep learning is to learn not only the nonlinear mapping between the inputs and outputs but also the underlying structure of the data (input) vectors. In this chapter, we first consider problems with training deep networks using backpropagation-type algorithms. After this, we consider various structures used in deep learning, including restricted Boltzmann machines, deep belief networks, deep Boltzmann machines, and nonlinear autoencoders. In the latter part of this chapter, we discuss in more detail the recently developed neural autoregressive distribution estimator and its variants.
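To make the abstract's key idea concrete — learning the underlying structure of the input vectors rather than only an input-output mapping — here is a minimal sketch (not taken from the chapter) of a one-hidden-layer nonlinear autoencoder trained by backpropagation. All data, dimensions, and hyperparameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 samples in 10-D that lie near a 3-D subspace,
# so there is genuine low-dimensional structure to discover.
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 10))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 10))

n_in, n_hidden = 10, 3
W1 = 0.1 * rng.normal(size=(n_in, n_hidden))   # encoder weights
b1 = np.zeros(n_hidden)
W2 = 0.1 * rng.normal(size=(n_hidden, n_in))   # decoder weights
b2 = np.zeros(n_in)

def forward(X):
    H = np.tanh(X @ W1 + b1)   # nonlinear code (the learned structure)
    R = H @ W2 + b2            # reconstruction of the input
    return H, R

lr = 0.01
_, R = forward(X)
err_before = np.mean((R - X) ** 2)

# Plain full-batch gradient descent on the reconstruction error.
for _ in range(500):
    H, R = forward(X)
    G = 2.0 * (R - X) / len(X)          # per-sample error gradient wrt R
    dW2 = H.T @ G                       # decoder gradients
    db2 = G.sum(axis=0)
    dH = (G @ W2.T) * (1.0 - H ** 2)    # backprop through tanh encoder
    dW1 = X.T @ dH
    db1 = dH.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

_, R = forward(X)
err_after = np.mean((R - X) ** 2)
print(err_before, err_after)   # reconstruction error should drop
```

Because the 3-unit bottleneck matches the data's true latent dimension, the network can reconstruct the inputs well only by capturing that structure — the same motivation the abstract gives for autoencoders, restricted Boltzmann machines, and the other deep models the chapter surveys.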

Original language: English (US)
Title of host publication: Advances in Independent Component Analysis and Learning Machines
Publisher: Elsevier
Pages: 125-142
Number of pages: 18
ISBN (Electronic): 9780128028063
DOIs
State: Published - Jan 1 2015

Keywords

  • Autoencoders
  • Deep belief networks
  • Deep Boltzmann machines
  • Deep learning
  • Neural autoregressive distribution estimators
  • Neural networks
  • Restricted Boltzmann machines
  • Unsupervised learning

ASJC Scopus subject areas

  • Computer Science(all)
