Object-based transcoding for adaptable video content delivery

Anthony Vetro, Huifang Sun, Yao Wang

Research output: Contribution to journal › Article › peer-review


This paper introduces a new framework for video content delivery that is based on the transcoding of multiple video objects. Generally speaking, transcoding can be defined as the manipulation or conversion of data into another, more desirable format. In this paper, we consider manipulations of object-based video content, more specifically, conversions from one set of bit streams to another. Given the object-based framework, we present a set of new algorithms that are responsible for manipulating the original set of video bit streams. Depending on the particular strategy that is adopted, the transcoder attempts to satisfy network conditions or user requirements in various ways. One of the main contributions of this paper is to discuss the degrees of freedom within an object-based transcoder and demonstrate the flexibility that it has in adapting the content. Two approaches are considered: a dynamic programming approach and an approach that is based on available meta-data. Simulations with these two approaches provide insight regarding the bit allocation among objects and illustrate the tradeoffs that can be made in adapting the content. When certain meta-data about the content is available, we show that bit allocation can be significantly improved, key objects can be identified, and varying the temporal resolution of objects can be considered.
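The abstract mentions a dynamic programming approach to bit allocation among video objects. As a rough illustration of that general idea (not the paper's actual algorithm), one can frame the problem as choosing one rate-distortion operating point per object so that total distortion is minimized under a shared bit budget, and solve it with a simple DP over objects and bits used. All names and the sample operating points below are hypothetical:

```python
# Hypothetical sketch: dynamic-programming bit allocation among video objects.
# Each object offers a few (bits, distortion) operating points; we pick one
# point per object to minimize total distortion within a total bit budget.
# This is an illustrative formulation, not the algorithm from the paper.

def allocate_bits(objects, budget):
    """objects: list, one entry per object, each a list of (bits, distortion)
    operating points. Returns (min total distortion, chosen point index per
    object); (inf, []) if no feasible allocation fits the budget."""
    # dp maps total bits used -> (best total distortion, chosen indices)
    dp = {0: (0.0, [])}
    for points in objects:
        nxt = {}
        for used, (dist, picks) in dp.items():
            for idx, (bits, d) in enumerate(points):
                nb = used + bits
                if nb > budget:
                    continue  # exceeds the shared bit budget
                cand = (dist + d, picks + [idx])
                if nb not in nxt or cand[0] < nxt[nb][0]:
                    nxt[nb] = cand
        dp = nxt
    if not dp:
        return float("inf"), []
    return min(dp.values(), key=lambda v: v[0])

# Example: two objects, each with a low-rate and a high-rate operating point.
objects = [[(100, 5.0), (200, 2.0)], [(150, 4.0), (300, 1.0)]]
best_distortion, choices = allocate_bits(objects, 400)  # -> distortion 6.0
```

The DP state is keyed on bits consumed so far, so the table stays small when operating points are coarsely quantized; a real transcoder would also have to account for inter-object dependencies and temporal resolution, which this sketch ignores.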

Original language: English (US)
Pages (from-to): 387-401
Number of pages: 15
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Issue number: 3
State: Published - Mar 2001


Keywords

  • Bit-rate reduction
  • MPEG-4
  • MPEG-7
  • Quality of service
  • Transcoding hints
  • Universal access
  • Video objects

ASJC Scopus subject areas

  • Media Technology
  • Electrical and Electronic Engineering

