Context-dependent word representation for neural machine translation

Heeyoul Choi, Kyunghyun Cho, Yoshua Bengio

Research output: Contribution to journal › Article › peer-review

Abstract

We first observe a potential weakness of continuous vector representations of symbols in neural machine translation. That is, the continuous vector representation, or word embedding vector, of a symbol encodes multiple dimensions of similarity, equivalent to encoding more than one meaning of the word. As a consequence, the encoder and decoder recurrent networks in neural machine translation need to spend a substantial amount of their capacity on disambiguating source and target words based on the context, which is defined by the source sentence. Based on this observation, in this paper we propose to contextualize the word embedding vectors using a nonlinear bag-of-words representation of the source sentence. Additionally, we propose to represent special tokens (such as numbers, proper nouns and acronyms) with typed symbols to facilitate translating those words that are not well-suited to be translated via continuous vectors. Experiments on En–Fr and En–De reveal that the proposed contextualization and symbolization approaches significantly improve the translation quality of neural machine translation systems.
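The contextualization idea admits a compact sketch: summarize the source sentence as a bag of word embeddings, pass that summary through a nonlinearity, and use the result to gate every word embedding, so the recurrent encoder receives vectors already narrowed toward the sentence's topic. The NumPy sketch below is a minimal illustration under assumed shapes and an assumed sigmoid gate; the names (contextualized_embeddings, W_c, b_c) are hypothetical, and the paper's exact parameterization may differ.

```python
import numpy as np

def contextualized_embeddings(token_ids, emb, W_c, b_c):
    """Gate each source word embedding with a nonlinear bag-of-words context.

    Sketch only: emb is a (V, d) embedding table, W_c a (d, d) weight matrix,
    b_c a (d,) bias. The paper's actual parameterization may differ.
    """
    E = emb[token_ids]         # (T, d) embeddings of the source sentence
    bow = E.mean(axis=0)       # (d,) bag-of-words summary of the sentence
    gate = 1.0 / (1.0 + np.exp(-(bow @ W_c + b_c)))  # nonlinear context gate in (0, 1)
    return E * gate            # one sentence-level gate scales every word

# Toy usage with random parameters (all values hypothetical).
rng = np.random.default_rng(0)
V, d = 1000, 8
emb = rng.normal(scale=0.1, size=(V, d))
W_c = rng.normal(scale=0.1, size=(d, d))
b_c = np.zeros(d)
print(contextualized_embeddings(np.array([3, 17, 42]), emb, W_c, b_c).shape)  # (3, 8)
```

Symbolization, by contrast, is a preprocessing convention: tokens such as numbers, proper nouns and acronyms would be replaced with typed placeholder symbols before training (for instance, a numeral mapped to a single number-type token), presumably restored after decoding, so that the continuous embeddings never have to carry their exact identities.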

Original language: English (US)
Pages (from-to): 149-160
Number of pages: 12
Journal: Computer Speech and Language
Volume: 45
DOIs
State: Published - Sep 2017

Keywords

  • Contextualization
  • Neural machine translation
  • Symbolization

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science
  • Human-Computer Interaction
