Block-based Learned Image Coding with Convolutional Autoencoder and Intra-Prediction Aided Entropy Coding

Zhongzheng Yuan, Haojie Liu, Debargha Mukherjee, Balu Adsumilli, Yao Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Recent works on learned image coding using autoencoder models have achieved promising results in rate-distortion performance. Typically, an autoencoder is used to transform an image into a latent tensor, which is then quantized and entropy coded. Based on the work of Ballé et al., we adapt the autoencoder with a hyperprior model to code images in a block-based approach. When the autoencoder model is directly applied to code small image blocks, spatial redundancy in the larger image cannot be fully utilized, resulting in a decrease in rate-distortion performance. We propose a method that utilizes border information in the entropy coding of the latent and hyper-latent tensors, which achieves promising results. We show that using intra-prediction to aid entropy coding is more effective than applying a convolutional autoencoder with a hyperprior to intra-prediction residual blocks.
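To make the pipeline described in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch of a block-based hyperprior autoencoder whose entropy parameters for the latent are conditioned on both the decoded hyper-latent and features from the block's border context. The module names, layer sizes, and border-fusion strategy (a small convolutional network plus a 1x1 fusion layer producing Gaussian mean/scale) are illustrative assumptions, not the architecture from the paper.

```python
# Hypothetical sketch: block-based hyperprior codec with border-conditioned
# entropy parameters. Layer choices are illustrative, not the authors' model.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BlockHyperpriorCodec(nn.Module):
    def __init__(self, ch=128):
        super().__init__()
        # Analysis transform: image block -> latent y (downsample x8)
        self.encoder = nn.Sequential(
            nn.Conv2d(3, ch, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(ch, ch, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(ch, ch, 5, stride=2, padding=2),
        )
        # Synthesis transform: quantized latent -> reconstructed block
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(ch, ch, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(ch, ch, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(ch, 3, 5, stride=2, padding=2, output_padding=1),
        )
        # Hyper-encoder: latent y -> hyper-latent z
        self.hyper_enc = nn.Sequential(
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1),
        )
        # Hyper-decoder: quantized z -> features for the entropy model of y
        self.hyper_dec = nn.Sequential(
            nn.ConvTranspose2d(ch, ch, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(ch, 2 * ch, 3, stride=2, padding=1, output_padding=1),
        )
        # Border network: features from previously decoded neighbor pixels
        # (e.g. top/left strips), available at both encoder and decoder.
        self.border_net = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1),
        )
        # Fuse hyper and border features into Gaussian (mu, sigma) for y.
        self.fuse = nn.Conv2d(3 * ch, 2 * ch, 1)

    @staticmethod
    def quantize(t):
        # Straight-through rounding for training; at test time the rounded
        # values would be arithmetic-coded with the predicted (mu, sigma).
        return t + (torch.round(t) - t).detach()

    def forward(self, block, border):
        y = self.encoder(block)
        z = self.hyper_enc(y)
        z_hat = self.quantize(z)
        y_hat = self.quantize(y)
        hyper_feat = self.hyper_dec(z_hat)      # side info from hyper-latent
        border_feat = self.border_net(border)   # causal context from neighbors
        params = self.fuse(torch.cat([hyper_feat, border_feat], dim=1))
        mu, sigma = params.chunk(2, dim=1)
        x_hat = self.decoder(y_hat)
        return x_hat, y_hat, mu, F.softplus(sigma)


# Example with a 64x64 block and a same-size padded border context (assumed shapes).
codec = BlockHyperpriorCodec()
block = torch.rand(1, 3, 64, 64)
border = torch.rand(1, 3, 64, 64)
x_hat, y_hat, mu, sigma = codec(block, border)
```

The key point the sketch tries to convey is that the border features are derived only from already-decoded content, so the decoder can reproduce the same (mu, sigma) and entropy-decode the latent without extra signaling.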

Original language: English (US)
Title of host publication: 2021 Picture Coding Symposium, PCS 2021 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781665425452
DOIs
State: Published - Jun 2021
Event: 35th Picture Coding Symposium, PCS 2021 - Virtual, Online
Duration: Jun 29 2021 - Jul 2 2021

Publication series

Name: 2021 Picture Coding Symposium, PCS 2021 - Proceedings

Conference

Conference: 35th Picture Coding Symposium, PCS 2021
City: Virtual, Online
Period: 6/29/21 - 7/2/21

Keywords

  • Block-based coding
  • Deep learning
  • Intra-prediction
  • Learned image compression

ASJC Scopus subject areas

  • Signal Processing
  • Media Technology

