Major cast detection in video using both speaker and face information

Zhu Liu, Yao Wang

Research output: Contribution to journal › Article › peer-review

Abstract

Major casts, for example, the anchor persons or reporters in news broadcast programs and the principal characters in movies, play an important role in video, and their occurrences provide meaningful indices for organizing and presenting video content. This paper describes a new approach for automatically generating a list of major casts in a video sequence based on multiple modalities, specifically, speaker information in the audio track and face information in the video track. The core algorithm is composed of three steps. First, speaker boundaries are detected and speaker segments are clustered in the audio stream. Second, face appearances are tracked and face tracks are clustered in the video stream. Finally, correspondences between speakers and faces are determined based on their temporal co-occurrence. A list of major casts is constructed and ranked in an order that reflects each cast's importance, which is determined by the cumulative temporal and spatial presence of the cast. The proposed algorithm has been integrated in a major-cast-based video browsing system, which presents the face icon and marks the speech locations on the timeline for each detected major cast. The system provides a semantically meaningful summary of the video content, which helps the user effectively digest the theme of the video.
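To make the final integration step concrete, the following is a minimal Python sketch of matching speaker clusters to face-track clusters by temporal co-occurrence and ranking the resulting casts by accumulated presence, as the abstract describes. All names, the greedy one-to-one matching strategy, and the presence measure (speech time plus face time, omitting the spatial component) are illustrative assumptions, not the authors' exact formulation.

from dataclasses import dataclass

@dataclass
class Segment:
    start: float  # seconds
    end: float

def overlap(a: Segment, b: Segment) -> float:
    """Length (in seconds) of the temporal intersection of two segments."""
    return max(0.0, min(a.end, b.end) - max(a.start, b.start))

def cooccurrence(speaker_segs, face_segs) -> float:
    """Total co-occurrence time between one speaker cluster and one face cluster."""
    return sum(overlap(s, f) for s in speaker_segs for f in face_segs)

def match_and_rank(speakers, faces):
    """Greedily pair each speaker cluster with the unused face cluster it
    co-occurs with most, then rank casts by total presence time (hypothetical
    simplification: the paper also weighs spatial presence)."""
    casts = []
    used = set()
    for spk_id, spk_segs in speakers.items():
        best_face, best_score = None, 0.0
        for face_id, face_segs in faces.items():
            if face_id in used:
                continue
            score = cooccurrence(spk_segs, face_segs)
            if score > best_score:
                best_face, best_score = face_id, score
        if best_face is not None:
            used.add(best_face)
        # Importance proxy: accumulated speech time plus accumulated face time.
        presence = sum(s.end - s.start for s in spk_segs)
        if best_face is not None:
            presence += sum(f.end - f.start for f in faces[best_face])
        casts.append((spk_id, best_face, presence))
    return sorted(casts, key=lambda c: c[2], reverse=True)

if __name__ == "__main__":
    # Toy input: two speaker clusters and two face-track clusters.
    speakers = {"spk0": [Segment(0, 30), Segment(60, 90)],
                "spk1": [Segment(30, 60)]}
    faces = {"face0": [Segment(5, 28), Segment(62, 88)],
             "face1": [Segment(31, 58)]}
    for spk, face, presence in match_and_rank(speakers, faces):
        print(f"{spk} <-> {face}: total presence {presence:.1f}s")

Running the toy example pairs spk0 with face0 and spk1 with face1, and ranks spk0 first because it accumulates more on-screen and speaking time.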

Original language: English (US)
Pages (from-to): 89-101
Number of pages: 13
Journal: IEEE Transactions on Multimedia
Volume: 9
Issue number: 1
DOIs
State: Published - Jan 2007

Keywords

  • Content-based multimedia indexing
  • Face detection
  • Major cast detection
  • Media integration
  • Speaker segmentation
  • Video browsing
  • Video summary

ASJC Scopus subject areas

  • Signal Processing
  • Media Technology
  • Computer Science Applications
  • Electrical and Electronic Engineering
