Use of two-dimensional deformable mesh structures for video coding, Part II - The analysis problem and a region-based coder employing an active mesh representation

Yao Wang, Ouseb Lee, Anthony Vetro

Research output: Contribution to journal › Article

Abstract

This paper explores the use of the deformable mesh structure for motion/shape analysis and synthesis in an image sequence. In Part I of this paper, we reviewed theory and techniques developed in the finite element method for function interpolation and mapping using a given mesh structure. This constitutes the synthesis problem in a video coder employing a mesh-based motion model. Here in Part II, we present algorithms for the analysis problem, including scene-adaptive mesh generation and node tracking over successive frames. We also describe a region-based video coder that integrates the analysis and synthesis algorithms presented in this paper. The coder describes each region by an ensemble of connected quadrilateral elements embedded in a mesh structure. For each region, its shape and texture are described by the nodal positions and image functions of the elements in this region in an initial frame, while its motion (including shape deformation) is characterized by the nodal trajectories in the following frames, which are in turn specified by a few motion parameters. This coder has been applied to a typical common intermediate format (CIF) resolution, head-and-shoulder type sequence. The visual quality is significantly better than that of the H.263-TMN4 algorithm at about 50 kb/s (for the luminance component only, 30 Hz).
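The synthesis step summarized above, interpolating a dense motion field from the nodal trajectories of a quadrilateral element, can be illustrated with standard bilinear (Q4) shape functions from the finite element method. The sketch below is not the authors' implementation; node ordering, local coordinates, and the function name are illustrative assumptions.

```python
import numpy as np

def bilinear_motion(xi, eta, node_motions):
    """Interpolate the motion vector at local coordinates (xi, eta) in
    [0, 1]^2 of a quadrilateral element from the motion vectors of its
    four corner nodes, using bilinear (Q4) shape functions.

    Nodes are assumed ordered (0,0), (1,0), (1,1), (0,1) in local
    coordinates -- an illustrative convention, not from the paper.
    """
    w = np.array([
        (1 - xi) * (1 - eta),  # weight of node at (0, 0)
        xi * (1 - eta),        # weight of node at (1, 0)
        xi * eta,              # weight of node at (1, 1)
        (1 - xi) * eta,        # weight of node at (0, 1)
    ])
    # Weighted sum of the nodal (dx, dy) motion vectors.
    return w @ np.asarray(node_motions, dtype=float)

# Hypothetical nodal motion vectors (dx, dy) at the four corners.
nodes = [(1.0, 0.0), (2.0, 0.0), (2.0, 1.0), (1.0, 1.0)]
print(bilinear_motion(0.5, 0.5, nodes))  # motion at the element center: [1.5 0.5]
```

Because the shape functions are continuous across shared element edges, the interpolated motion field is continuous over the whole mesh, which is what lets a small set of nodal trajectories describe smooth shape deformation.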

Original language: English (US)
Pages (from-to): 647-659
Number of pages: 13
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 6
Issue number: 6
DOIs
State: Published - 1996

ASJC Scopus subject areas

  • Media Technology
  • Electrical and Electronic Engineering

