Multiple-description video coding using motion-compensated temporal prediction

Amy R. Reibman, Hamid Jafarkhani, Yao Wang, Michael T. Orchard, Rohit Puri

Research output: Contribution to journal › Article › peer-review

Abstract

We propose multiple-description (MD) video coders that use motion-compensated prediction. Our MD video coders use MD transform coding and three separate prediction paths at the encoder to mimic the three possible scenarios at the decoder: both descriptions received, or only one of the two single descriptions received. We provide three different algorithms to control the mismatch between the prediction loops at the encoder and decoder. We present simulation results comparing the three approaches to two standards-based approaches to MD video coding. We show that when the main prediction loop at the encoder uses a two-channel reconstruction, it is important to have side prediction loops and to transmit some redundancy information to control mismatch. We also examine the performance of our MD video coder with partial mismatch control in the presence of random packet loss, and demonstrate a significant improvement over more traditional approaches.
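To illustrate the MD transform coding idea behind the two-channel and side reconstructions mentioned above, here is a minimal sketch of a pairwise correlating transform on a single coefficient pair. The function names and the equal-variance side estimator are illustrative assumptions for this sketch, not the paper's actual coder, which additionally involves motion-compensated prediction loops and mismatch control.

```python
import math

def mdtc_encode(a, b):
    """Pairing transform: rotate the pair (a, b) by 45 degrees so that
    each output coefficient carries information about both inputs."""
    c = (a + b) / math.sqrt(2)  # carried in description 1
    d = (a - b) / math.sqrt(2)  # carried in description 2
    return c, d

def mdtc_decode(c=None, d=None):
    """Reconstruct (a, b) from whichever descriptions arrived."""
    if c is not None and d is not None:
        # Two-channel reconstruction: exact inverse of the transform.
        return (c + d) / math.sqrt(2), (c - d) / math.sqrt(2)
    if c is not None:
        # Side reconstruction from description 1 only: c fixes a + b,
        # so (assumed equal variances) the best guess is a = b.
        est = c / math.sqrt(2)
        return est, est
    if d is not None:
        # Side reconstruction from description 2 only: d fixes a - b,
        # so the best guess sets a + b = 0.
        est = d / math.sqrt(2)
        return est, -est
    raise ValueError("no description received")
```

When both descriptions arrive, the pair is recovered exactly; from a single description, the decoder falls back to an estimate whose error shrinks as the inputs become more correlated, which is the redundancy/quality trade-off the coder exploits.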

Original language: English (US)
Pages (from-to): 193-204
Number of pages: 12
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 12
Issue number: 3
State: Published - Mar 2002

ASJC Scopus subject areas

  • Media Technology
  • Electrical and Electronic Engineering
