Oracle performance for visual captioning

Li Yao, Nicolas Ballas, Kyunghyun Cho, John R. Smith, Yoshua Bengio

Research output: Contribution to conference › Paper › peer-review


The task of associating images and videos with a natural language description has attracted a great deal of attention recently. State-of-the-art results on some of the standard datasets have been pushed into a regime where it has become increasingly difficult to make significant improvements. Instead of proposing new models, this work investigates the performance that an oracle can obtain. To disentangle the contribution of the visual model from that of the language model, our oracle assumes that a high-quality visual concept extractor is available and focuses only on the language part. We demonstrate the construction of such oracles on MS-COCO, YouTube2Text and LSMDC (a combination of M-VAD and MPII-MD). Surprisingly, despite the simplicity of the model and the training procedure, we show that current state-of-the-art models fall short when compared with the learned oracle. Furthermore, this suggests that current models are unable to capture important visual concepts in captioning tasks.
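The oracle described in the abstract can be approximated by a conditional language model whose "visual" input is a bag of concept words lifted directly from the reference caption rather than predicted from pixels. Below is a minimal sketch in PyTorch of this idea, assuming a multi-hot concept vector used to initialize an LSTM caption decoder; the class name `OracleCaptioner` and all dimensions are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an oracle captioner: the decoder is conditioned on
# ground-truth visual concepts (taken from the reference caption), so any
# remaining error is attributable to the language model alone.
import torch
import torch.nn as nn

class OracleCaptioner(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # The multi-hot concept bag stands in for a perfect visual model.
        self.concept_proj = nn.Linear(vocab_size, hidden_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, concept_bag):
        # tokens: (B, T) caption word ids; concept_bag: (B, vocab_size) multi-hot.
        h0 = torch.tanh(self.concept_proj(concept_bag)).unsqueeze(0)
        c0 = torch.zeros_like(h0)
        emb = self.embed(tokens)
        hidden, _ = self.lstm(emb, (h0, c0))
        return self.out(hidden)  # (B, T, vocab_size) next-word logits

# Toy usage: the oracle "sees" two concepts lifted from the reference caption.
vocab_size = 1000
model = OracleCaptioner(vocab_size)
tokens = torch.randint(0, vocab_size, (2, 7))  # two captions of length 7
concepts = torch.zeros(2, vocab_size)
concepts[:, [42, 97]] = 1.0                    # hypothetical concept ids
logits = model(tokens, concepts)
print(logits.shape)  # torch.Size([2, 7, 1000])
```

Training such a model with standard teacher-forced cross-entropy on the reference captions yields an upper bound on captioning performance given perfect visual concepts, which is the comparison the paper draws against state-of-the-art systems.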

Original language: English (US)
State: Published - 2016
Event: 27th British Machine Vision Conference, BMVC 2016 - York, United Kingdom
Duration: Sep 19, 2016 - Sep 22, 2016



ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition

