TY - JOUR
T1 - Major cast detection in video using both speaker and face information
AU - Liu, Zhu
AU - Wang, Yao
N1 - Funding Information:
Manuscript received August 31, 2001; revised April 25, 2006. This work was supported in part by the National Science Foundation through its STIMULATE program under Grant IRI-9619114. This work was previously presented at ICASSP’2001, Salt Lake City, UT, May 2001. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Sankar Basu.
PY - 2007/1
Y1 - 2007/1
N2 - Major casts, for example, the anchor persons or reporters in news broadcast programs and the principal characters in movies, play an important role in video, and their occurrences provide meaningful indices for organizing and presenting video content. This paper describes a new approach for automatically generating a list of major casts in a video sequence based on multiple modalities, specifically, speaker information in the audio track and face information in the video track. The core algorithm is composed of three steps. First, speaker boundaries are detected and speaker segments are clustered in the audio stream. Second, face appearances are tracked and face tracks are clustered in the video stream. Finally, correspondences between speakers and faces are determined based on their temporal co-occurrence. A list of major casts is constructed and ranked in an order that reflects each cast's importance, which is determined by the cumulative temporal and spatial presence of the cast. The proposed algorithm has been integrated in a major-cast-based video browsing system, which presents the face icon and marks the speech locations in the time stream for each detected major cast. The system provides a semantically meaningful summary of the video content, which helps the user effectively digest the theme of the video.
AB - Major casts, for example, the anchor persons or reporters in news broadcast programs and the principal characters in movies, play an important role in video, and their occurrences provide meaningful indices for organizing and presenting video content. This paper describes a new approach for automatically generating a list of major casts in a video sequence based on multiple modalities, specifically, speaker information in the audio track and face information in the video track. The core algorithm is composed of three steps. First, speaker boundaries are detected and speaker segments are clustered in the audio stream. Second, face appearances are tracked and face tracks are clustered in the video stream. Finally, correspondences between speakers and faces are determined based on their temporal co-occurrence. A list of major casts is constructed and ranked in an order that reflects each cast's importance, which is determined by the cumulative temporal and spatial presence of the cast. The proposed algorithm has been integrated in a major-cast-based video browsing system, which presents the face icon and marks the speech locations in the time stream for each detected major cast. The system provides a semantically meaningful summary of the video content, which helps the user effectively digest the theme of the video.
KW - Content-based multimedia indexing
KW - Face detection
KW - Major cast detection
KW - Media integration
KW - Speaker segmentation
KW - Video browsing
KW - Video summary
UR - http://www.scopus.com/inward/record.url?scp=33846216333&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=33846216333&partnerID=8YFLogxK
U2 - 10.1109/TMM.2006.886360
DO - 10.1109/TMM.2006.886360
M3 - Article
AN - SCOPUS:33846216333
SN - 1520-9210
VL - 9
SP - 89
EP - 101
JO - IEEE Transactions on Multimedia
JF - IEEE Transactions on Multimedia
IS - 1
ER -