Emergent translation in multi-agent communication

Jason Lee, Kyunghyun Cho, Jason Weston, Douwe Kiela

Research output: Contribution to conference › Paper

Abstract

While most machine translation systems to date are trained on large parallel corpora, humans learn language in a different way: by being grounded in an environment and interacting with other humans. In this work, we propose a communication game where two agents, native speakers of their own respective languages, jointly learn to solve a visual referential task. We find that the ability to understand and translate a foreign language emerges as a means to achieve shared goals. The emergent translation is interactive and multimodal, and crucially does not require parallel corpora, but only monolingual, independent text and corresponding images. Our proposed translation model achieves this by grounding the source and target languages into a shared visual modality, and outperforms several baselines on both word-level and sentence-level translation tasks. Furthermore, we show that agents in a multilingual community learn to translate better and faster than in a bilingual communication setting.
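The referential game described above can be sketched in miniature: a speaker agent encodes a target image into a discrete symbol of its own "language", and a listener agent grounds that symbol back in the visual modality by scoring candidate images. This is a minimal illustrative sketch only — the `Speaker`/`Listener` classes, random linear encoders, and dimensions below are hypothetical stand-ins for the trained neural encoders the paper actually uses, and no learning takes place here.

```python
import numpy as np

rng = np.random.default_rng(0)

D_IMG, VOCAB = 8, 16  # made-up image dimension and vocabulary size


class Speaker:
    """Encodes an image into a discrete symbol in its own 'language'."""

    def __init__(self):
        # Random linear encoder (a stand-in for a learned network).
        self.W = rng.standard_normal((VOCAB, D_IMG))

    def speak(self, image):
        # Greedily pick the vocabulary symbol whose row best matches the image.
        return int(np.argmax(self.W @ image))


class Listener:
    """Grounds received symbols in the shared visual modality."""

    def __init__(self, embeddings):
        # Symbol embeddings in image space; here copied from the speaker
        # as a stand-in for weights learned through joint play.
        self.E = embeddings

    def listen(self, symbol, candidates):
        # Pick the candidate image that best matches the symbol's grounding.
        scores = [float(self.E[symbol] @ img) for img in candidates]
        return int(np.argmax(scores))


# One round of the game: the listener must identify the target image
# among distractors, using only the speaker's message.
images = [rng.standard_normal(D_IMG) for _ in range(4)]
target = 2
speaker = Speaker()
listener = Listener(speaker.W.copy())
msg = speaker.speak(images[target])
guess = listener.listen(msg, images)
```

In the paper's setting, success on rounds like this one is the only training signal: because both agents must ground their messages in the same images to win, a pivot through the visual modality emerges, which is what enables translation without parallel text.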

Original language: English (US)
State: Published - Jan 1 2018
Event: 6th International Conference on Learning Representations, ICLR 2018 - Vancouver, Canada
Duration: Apr 30 2018 – May 3 2018

Conference

Conference: 6th International Conference on Learning Representations, ICLR 2018
Country: Canada
City: Vancouver
Period: 4/30/18 – 5/3/18

ASJC Scopus subject areas

  • Language and Linguistics
  • Education
  • Computer Science Applications
  • Linguistics and Language


Cite this

Lee, J., Cho, K., Weston, J., & Kiela, D. (2018). Emergent translation in multi-agent communication. Paper presented at 6th International Conference on Learning Representations, ICLR 2018, Vancouver, Canada.