QCompere @ REPERE 2013

Hervé Bredin, Johann Poignant, Guillaume Fortier, Makarand Tapaswi, Viet Bac Le, Anindya Roy, Claude Barras, Sophie Rosset, Achintya Sarkar, Qian Yang, Hua Gao, Alexis Mignon, Jakob Verbeek, Laurent Besacier, Georges Quénot, Hazim Kemal Ekenel, Rainer Stiefelhagen

Research output: Contribution to journal › Conference article › peer-review

Abstract

We describe the QCompere consortium's submissions to the REPERE 2013 evaluation campaign. The REPERE challenge aims at bringing four communities (face recognition, speaker identification, optical character recognition, and named entity detection) together toward the same goal: multimodal person recognition in TV broadcast. First, four mono-modal components are introduced (one for each of the aforementioned communities), constituting the elementary building blocks of our various submissions. Then, depending on the target modality (speaker or face recognition) and on the task (supervised or unsupervised recognition), four different fusion techniques are introduced; they can be summarized as propagation-, classifier-, rule-, and graph-based approaches. Finally, their performance is evaluated on the REPERE 2013 test set, and their advantages and limitations are discussed.
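
To give an intuition for one of the fusion families named above, the following is a minimal, hypothetical sketch of propagation-based fusion: names detected by video OCR while a speaker cluster is talking are propagated to that cluster, which is then labeled by majority vote. This is an illustrative reconstruction of the general idea, not the consortium's actual implementation; all function names, field names, and data structures here are assumptions.

```python
from collections import Counter

def overlaps(turn, ocr):
    """True if a speech turn and an OCR name occurrence co-occur in time."""
    return turn["start"] < ocr["end"] and ocr["start"] < turn["end"]

def propagate_names(speech_turns, ocr_names):
    """Hypothetical propagation-based fusion sketch.

    speech_turns: [{'cluster': str, 'start': float, 'end': float}, ...]
    ocr_names:    [{'name': str, 'start': float, 'end': float}, ...]
    Returns {cluster_id: most frequently co-occurring OCR name}.
    """
    votes = {}
    for turn in speech_turns:
        for ocr in ocr_names:
            if overlaps(turn, ocr):
                # Each temporal co-occurrence counts as one vote for
                # labeling this speaker cluster with the OCR name.
                votes.setdefault(turn["cluster"], Counter())[ocr["name"]] += 1
    return {cluster: counter.most_common(1)[0][0]
            for cluster, counter in votes.items()}

# Toy usage with made-up timestamps:
turns = [{"cluster": "spk0", "start": 0.0, "end": 5.0},
         {"cluster": "spk0", "start": 12.0, "end": 15.0}]
names = [{"name": "Herve Bredin", "start": 1.0, "end": 4.0}]
print(propagate_names(turns, names))  # {'spk0': 'Herve Bredin'}
```

The same voting scheme could in principle propagate names to face tracks instead of speaker clusters; the paper itself should be consulted for the actual propagation rules used in each submission.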

Original language: English (US)
Pages (from-to): 49-54
Number of pages: 6
Journal: CEUR Workshop Proceedings
Volume: 1012
State: Published - 2013
Event: 1st Workshop on Speech, Language and Audio in Multimedia, SLAM 2013 - Marseille, France
Duration: Aug 22, 2013 - Aug 23, 2013

Keywords

  • Face recognition
  • Multimodal fusion
  • Named entity detection
  • Speaker identification
  • Video optical character recognition

ASJC Scopus subject areas

  • General Computer Science
