Show, attend and tell: Neural image caption generation with visual attention

Kelvin Xu, Jimmy Lei Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S. Zemel, Yoshua Bengio

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Inspired by recent work in machine translation and object detection, we introduce an attention-based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.
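
The "deterministic" training mentioned in the abstract corresponds to the paper's soft-attention variant: the context vector fed to the decoder is the expectation of the CNN annotation vectors under the attention distribution, so the model stays fully differentiable and ordinary backpropagation applies. Below is a minimal PyTorch sketch of one such soft-attention step. The class and parameter names (SoftAttention, feat_dim, attn_dim) and the single-hidden-layer scoring MLP are illustrative assumptions about the general recipe, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftAttention(nn.Module):
    """Sketch of deterministic (soft) attention over image regions."""

    def __init__(self, feat_dim: int, hidden_dim: int, attn_dim: int):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, attn_dim)      # project each image region
        self.hidden_proj = nn.Linear(hidden_dim, attn_dim)  # project decoder hidden state
        self.score = nn.Linear(attn_dim, 1)                 # scalar relevance per region

    def forward(self, features: torch.Tensor, hidden: torch.Tensor):
        # features: (batch, num_regions, feat_dim) CNN annotation vectors
        # hidden:   (batch, hidden_dim) current decoder hidden state
        e = self.score(torch.tanh(
            self.feat_proj(features) + self.hidden_proj(hidden).unsqueeze(1)
        )).squeeze(-1)                                       # (batch, num_regions)
        alpha = F.softmax(e, dim=1)                          # attention weights, sum to 1
        context = (alpha.unsqueeze(-1) * features).sum(1)    # expected context vector
        return context, alpha                                # alpha is what gets visualized

# Example: 196 regions (a 14x14 feature grid) of 512-d features, 512-d LSTM state.
attn = SoftAttention(feat_dim=512, hidden_dim=512, attn_dim=256)
feats = torch.randn(2, 196, 512)
h = torch.randn(2, 512)
context, alpha = attn(feats, h)
```

The stochastic ("hard") variant instead samples a single attention location per step, which breaks differentiability; that is what the variational lower bound mentioned in the abstract is used for.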

Original language: English (US)
Title of host publication: 32nd International Conference on Machine Learning, ICML 2015
Editors: Francis Bach, David Blei
Publisher: International Machine Learning Society (IMLS)
Pages: 2048-2057
Number of pages: 10
ISBN (Electronic): 9781510810587
State: Published - 2015
Event: 32nd International Conference on Machine Learning, ICML 2015 - Lille, France
Duration: Jul 6, 2015 – Jul 11, 2015

Publication series

Name: 32nd International Conference on Machine Learning, ICML 2015
Volume: 3

Other

Other: 32nd International Conference on Machine Learning, ICML 2015
Country/Territory: France
City: Lille
Period: 7/6/15 – 7/11/15

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Computer Science Applications
