Evaluation of mesh-based motion estimation in H.263-like coders

Yao Wang, Jörn Ostermann

Research output: Contribution to journal › Review article › peer-review


In this paper, we present two mesh-based motion estimation algorithms and evaluate their performance when incorporated in an H.263-like block-based video coder. Both algorithms compute nodal motions in a hierarchical manner. Within each hierarchy level, the first algorithm (HMMA) minimizes the prediction error in the four elements surrounding each node, where the prediction is accomplished by a bilinear mapping. The optimal solution is obtained by a full search within a range defined by the topology of the mesh. The second algorithm (HBMA) minimizes the error in a block surrounding each node, assuming the motion in the block is constant. In both cases, bilinear mapping is used for motion-compensated prediction based on nodal displacements. The two algorithms are compared with an exhaustive block-matching algorithm (EBMA) by evaluating their performance in temporal prediction and in an H.263/TMN4 coder. For prediction only, the HMMA and HBMA algorithms yield visually more satisfactory results, even though the PSNRs of the predicted images are on average lower. The coded images also have lower PSNRs at similar bit rates. The coding artifacts differ: while the block-based method leads to more severe block distortions, the mesh-based method exhibits some warping artifacts. The HMMA algorithm slightly outperforms HBMA for certain sequences, at the expense of higher computational complexity.
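To illustrate the bilinear-mapping prediction used by both mesh-based algorithms, the sketch below warps one rectangular mesh element of the reference frame using motion vectors interpolated from the element's four corner nodes. This is a minimal illustration, not the paper's implementation: the function name, the `node_mv` layout, and the nearest-neighbour pixel fetch are assumptions made for brevity.

```python
import numpy as np

def bilinear_mc_predict(ref, node_mv, x0, y0, w, h):
    """Predict one w-by-h mesh element of the current frame from `ref`.

    node_mv: 2x2x2 array of (dx, dy) displacements at the element's
    four corner nodes, indexed [top/bottom][left/right][dx/dy].
    Illustrative sketch only; names and layout are assumptions,
    not the paper's notation.
    """
    pred = np.zeros((h, w))
    H, W = ref.shape
    for j in range(h):
        for i in range(w):
            # Bilinear weights of pixel (i, j) within the element.
            a, b = i / (w - 1), j / (h - 1)
            # Interpolate the four nodal motion vectors at this pixel.
            dx = ((1 - a) * (1 - b) * node_mv[0, 0, 0]
                  + a * (1 - b) * node_mv[0, 1, 0]
                  + (1 - a) * b * node_mv[1, 0, 0]
                  + a * b * node_mv[1, 1, 0])
            dy = ((1 - a) * (1 - b) * node_mv[0, 0, 1]
                  + a * (1 - b) * node_mv[0, 1, 1]
                  + (1 - a) * b * node_mv[1, 0, 1]
                  + a * b * node_mv[1, 1, 1])
            # Nearest-neighbour fetch from the reference frame.
            xs = int(round(np.clip(x0 + i + dx, 0, W - 1)))
            ys = int(round(np.clip(y0 + j + dy, 0, H - 1)))
            pred[j, i] = ref[ys, xs]
    return pred
```

Because the displacement field varies bilinearly across the element, neighbouring elements that share nodes produce a continuous warp, which is what avoids the blocking artifacts of constant-per-block motion at the cost of the warping artifacts noted in the abstract.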

Original language: English (US)
Pages (from-to): 243-252
Number of pages: 10
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Issue number: 3
State: Published - 1998


Keywords
  • Mesh-based methods
  • Motion analysis
  • Motion compensation
  • Video coding

ASJC Scopus subject areas

  • Media Technology
  • Electrical and Electronic Engineering

