Abstract
Recurrent neural networks are powerful models that learn temporal patterns in sequential data. For a long time, it was believed that recurrent networks are difficult to train using simple optimizers, such as stochastic gradient descent, due to the so-called vanishing gradient problem. In this paper, we show that learning longer-term patterns in real data, such as natural language, is perfectly possible using gradient descent. This is achieved with a slight structural modification of the simple recurrent neural network architecture. We encourage some of the hidden units to change their state slowly by making part of the recurrent weight matrix close to identity, thus forming a kind of longer-term memory. We evaluate our model on language modeling tasks on benchmark datasets, where we obtain performance similar to the much more complex Long Short-Term Memory (LSTM) networks (Hochreiter & Schmidhuber, 1997).
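To make the mechanism described in the abstract concrete, the following is a minimal NumPy sketch of the core idea: a subset of "slow" hidden units is driven by a recurrent matrix fixed close to the identity, so their state leaks slowly and integrates input over many time steps, while the remaining units behave like an ordinary recurrent layer. The sizes, variable names, and the leak value `alpha` are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_fast, n_slow = 8, 16, 4
alpha = 0.95  # diagonal value close to 1 -> near-identity recurrence for slow units

# Input and recurrent weights for the ordinary ("fast") hidden units.
W_in_fast = rng.normal(scale=0.1, size=(n_fast, n_in))
W_rec_fast = rng.normal(scale=0.1, size=(n_fast, n_fast + n_slow))

# The slow units use a recurrent matrix kept close to the identity,
# so their previous state is mostly carried over at each step.
W_in_slow = rng.normal(scale=0.1, size=(n_slow, n_in))
W_rec_slow = alpha * np.eye(n_slow)


def step(x, h_fast, h_slow):
    """One recurrent step: fast units use a standard nonlinearity,
    slow units leak their previous state through the near-identity matrix."""
    h = np.concatenate([h_fast, h_slow])
    new_fast = np.tanh(W_in_fast @ x + W_rec_fast @ h)
    new_slow = W_rec_slow @ h_slow + (1.0 - alpha) * (W_in_slow @ x)
    return new_fast, new_slow


# Run on a random sequence: the slow units accumulate information
# over many time steps, acting as a simple longer-term memory.
h_fast, h_slow = np.zeros(n_fast), np.zeros(n_slow)
for t in range(100):
    x_t = rng.normal(size=n_in)
    h_fast, h_slow = step(x_t, h_fast, h_slow)
```

Because the slow units' recurrence is close to identity, gradients passed through them decay much more slowly than through the fast units, which is what allows plain gradient descent to pick up longer-term dependencies in this sketch.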
| Original language | English (US) |
| --- | --- |
| State | Published - 2015 |
| Event | 3rd International Conference on Learning Representations, ICLR 2015 - San Diego, United States<br>Duration: May 7 2015 → May 9 2015 |
Conference
| Conference | 3rd International Conference on Learning Representations, ICLR 2015 |
| --- | --- |
| Country/Territory | United States |
| City | San Diego |
| Period | 5/7/15 → 5/9/15 |
ASJC Scopus subject areas
- Education
- Linguistics and Language
- Language and Linguistics
- Computer Science Applications