How to construct deep recurrent neural networks: Proceedings of the Second International Conference on Learning Representations (ICLR 2014)

Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Yoshua Bengio

Research output: Contribution to conference › Paper › peer-review

Abstract

In this paper, we explore different ways to extend a recurrent neural network (RNN) to a deep RNN. We start by arguing that the concept of depth in an RNN is not as clear as it is in feedforward neural networks. By carefully analyzing and understanding the architecture of an RNN, however, we identify three points of an RNN that may be made deeper: (1) the input-to-hidden function, (2) the hidden-to-hidden transition and (3) the hidden-to-output function. Based on this observation, we propose two novel architectures of a deep RNN which are orthogonal to the earlier approach of stacking multiple recurrent layers to build a deep RNN (Schmidhuber, 1992; El Hihi and Bengio, 1996). We provide an alternative interpretation of these deep RNNs using a novel framework based on neural operators. The proposed deep RNNs are empirically evaluated on the tasks of polyphonic music prediction and language modeling. The experimental results support our claim that the proposed deep RNNs benefit from depth and outperform conventional, shallow RNNs.
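As a reading aid (not part of this record, and not the authors' released code), the following is a minimal numpy sketch of one of the three depth points named in the abstract: a conventional single-layer hidden-to-hidden transition versus a deep, two-layer transition. All weight names, sizes, and the two-layer choice are illustrative assumptions; biases are omitted for brevity.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hid = 8, 16  # illustrative sizes, not taken from the paper

    U  = rng.normal(0.0, 0.1, (n_hid, n_in))   # input-to-hidden weights
    W1 = rng.normal(0.0, 0.1, (n_hid, n_hid))  # hidden-to-hidden, layer 1
    W2 = rng.normal(0.0, 0.1, (n_hid, n_hid))  # hidden-to-hidden, layer 2 (extra depth)

    def shallow_step(h, x):
        # Conventional RNN: a single-layer hidden-to-hidden transition
        return np.tanh(W1 @ h + U @ x)

    def deep_transition_step(h, x):
        # Deep-transition variant: the transition itself is a small MLP,
        # corresponding to point (2), the hidden-to-hidden transition
        z = np.tanh(W1 @ h + U @ x)  # intermediate layer
        return np.tanh(W2 @ z)       # second layer completes the update

    h_shallow = h_deep = np.zeros(n_hid)
    for x in rng.normal(size=(5, n_in)):  # a toy sequence of five input vectors
        h_shallow = shallow_step(h_shallow, x)
        h_deep = deep_transition_step(h_deep, x)
    print(h_shallow.shape, h_deep.shape)  # (16,) (16,)

The same pattern extends to the other two depth points: replacing the input-to-hidden map U @ x or the hidden-to-output readout with a small MLP, independently of stacking recurrent layers.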

Original language: English (US)
State: Published - Jan 1 2014
Event: 2nd International Conference on Learning Representations, ICLR 2014 - Banff, Canada
Duration: Apr 14 2014 – Apr 16 2014

Conference

Conference: 2nd International Conference on Learning Representations, ICLR 2014
Country/Territory: Canada
City: Banff
Period: 4/14/14 – 4/16/14

ASJC Scopus subject areas

  • Linguistics and Language
  • Language and Linguistics
  • Education
  • Computer Science Applications
