Stable and effective trainable greedy decoding for sequence to sequence learning

Yun Chen, Kyunghyun Cho, Samuel R. Bowman, Victor O.K. Li

Research output: Contribution to conference › Paper

Abstract

We introduce a fast, general method to manipulate the behavior of the decoder in a sequence-to-sequence neural network model. We propose a small neural network actor that observes and manipulates the hidden state of a previously trained decoder. We evaluate our model on the task of neural machine translation. In this task, we use beam search to decode sentences from the plain decoder for each training set input, rank them by BLEU score, and train the actor to encourage the decoder to generate the highest-BLEU output in a single greedy decoding operation without beam search. Experiments on several datasets and models show that our method yields substantial improvements in both translation quality and translation speed over its base system, with no additional data.
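The core idea described above — a small actor network that reads a frozen decoder's hidden state and nudges it, while the decoder's own weights stay untouched — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the actor architecture, sizes, and the additive form of the intervention are assumptions for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_size, actor_size = 8, 4

# Stand-in for one hidden state of a previously trained decoder (frozen).
h = rng.standard_normal(hidden_size)

# Hypothetical actor: a one-hidden-layer MLP with a small bottleneck.
# Only these weights would be trained against the BLEU-ranked targets;
# the decoder itself is never updated.
W1 = rng.standard_normal((actor_size, hidden_size)) * 0.1
W2 = rng.standard_normal((hidden_size, actor_size)) * 0.1

def actor_update(h):
    """Return the manipulated hidden state: h plus a small learned correction."""
    z = np.tanh(W1 @ h)   # actor observes the decoder state
    return h + W2 @ z     # residual-style additive manipulation

# At decoding time, greedy search would read from h_new instead of h.
h_new = actor_update(h)
print(h_new.shape == h.shape)
```

Because the correction is additive and the actor is small, greedy decoding with the actor keeps a single forward pass per step, which is how the method can improve quality while remaining faster than beam search.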

Original language: English (US)
State: Published - Jan 1 2018
Event: 6th International Conference on Learning Representations, ICLR 2018 - Vancouver, Canada
Duration: Apr 30 2018 - May 3 2018

Conference

Conference: 6th International Conference on Learning Representations, ICLR 2018
Country: Canada
City: Vancouver
Period: 4/30/18 - 5/3/18

ASJC Scopus subject areas

  • Education
  • Computer Science Applications
  • Linguistics and Language
  • Language and Linguistics

Cite this

Chen, Y., Cho, K., Bowman, S. R., & Li, V. O. K. (2018). Stable and effective trainable greedy decoding for sequence to sequence learning. Paper presented at 6th International Conference on Learning Representations, ICLR 2018, Vancouver, Canada.