Oracle performance for visual captioning

Li Yao, Nicolas Ballas, Kyunghyun Cho, John R. Smith, Yoshua Bengio

Research output: Contribution to conference › Paper

Abstract

The task of associating images and videos with natural language descriptions has attracted a great deal of attention recently. State-of-the-art results on several standard datasets have been pushed into a regime where it has become increasingly difficult to make significant improvements. Instead of proposing new models, this work investigates the performance that an oracle can obtain. To disentangle the contribution of the visual model from that of the language model, our oracle assumes that a high-quality visual concept extractor is available and focuses only on the language part. We demonstrate the construction of such oracles on MS-COCO, YouTube2Text and LSMDC (a combination of M-VAD and MPII-MD). Surprisingly, despite the simplicity of the model and the training procedure, we show that current state-of-the-art models fall short when compared with the learned oracle. This further suggests that current models fail to capture important visual concepts in captioning tasks.
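The abstract's oracle assumes a perfect visual concept extractor and evaluates only the language side. A minimal sketch of that assumption, not the paper's actual procedure: the "oracle concepts" for an image or video can be approximated by the most frequent content words across its reference captions, which a conditional language model would then be given as input. The `oracle_concepts` helper and the stopword list below are hypothetical illustrations, not code from the paper.

```python
from collections import Counter

# Hypothetical helper: approximate oracle "visual concepts" for one image/video
# by taking the k most frequent non-stopword tokens across its reference
# captions, standing in for the paper's assumed high-quality concept extractor.
STOPWORDS = {"a", "an", "the", "is", "are", "of", "in", "on", "and", "to", "with"}

def oracle_concepts(reference_captions, k=5):
    """Return the k most frequent content words from the reference captions."""
    counts = Counter(
        word
        for caption in reference_captions
        for word in caption.lower().split()
        if word not in STOPWORDS
    )
    return [word for word, _ in counts.most_common(k)]

captions = [
    "a man is playing a guitar on stage",
    "the man plays guitar in front of a crowd",
    "a musician is playing guitar",
]
print(oracle_concepts(captions, k=3))  # "guitar" appears in all three references
```

Conditioning a caption generator on such ground-truth-derived concepts removes the visual recognition bottleneck, so the resulting score upper-bounds what a captioning model with the same language component could achieve.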

Original language: English (US)
Pages: 141.1-141.13
DOI: 10.5244/C.30.141
State: Published - 2016
Event: 27th British Machine Vision Conference, BMVC 2016 - York, United Kingdom
Duration: Sep 19, 2016 – Sep 22, 2016

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition

Cite this

Yao, L., Ballas, N., Cho, K., Smith, J. R., & Bengio, Y. (2016). Oracle performance for visual captioning. 141.1-141.13. Paper presented at 27th British Machine Vision Conference, BMVC 2016, York, United Kingdom. https://doi.org/10.5244/C.30.141