Manipulation Attacks on Learned Image Compression

Kang Liu, Di Wu, Yangyu Wu, Yiru Wang, Dan Feng, Benjamin Tan, Siddharth Garg

Research output: Contribution to journal › Article › peer-review

Abstract

Deep learning (DL) techniques have shown promising results in image compression compared to conventional methods, achieving competitive bitrates and image reconstruction quality from the compressed latents. However, while learned image compression has progressed toward higher peak signal-to-noise ratio (PSNR) and fewer bits per pixel (bpp), its robustness to adversarial images has received little scrutiny. In this work, we investigate the robustness of image compression systems, where imperceptibly manipulated inputs can stealthily cause a significant increase in the compressed bitrate without compromising reconstruction quality. Such attacks can exhaust the storage or network bandwidth of computing systems and lead to service denial; we term this a denial-of-service (DoS) attack on image compressors. To characterize the robustness of state-of-the-art learned image compression, we mount white-box and black-box attacks. Our white-box attack performs gradient ascent on the entropy estimate of the bitstream, which serves as a differentiable approximation of the bitrate. For the black-box attack, we propose DCT-Net, a substitute model that simulates JPEG compression with architectural simplicity and lightweight training, enabling fast adversarial transferability. Our results on six image compression architectures, each at six bitrate qualities (thirty-six models in total), show that they are surprisingly fragile: the white-box attack achieves up to a 55× increase in bpp and the black-box attack up to 2×, revealing the devastating fragility of DL-based compression models. To improve robustness, we propose a novel compression architecture, factorAtn, that incorporates attention modules and a basic factorized entropy model, offering a promising trade-off between rate-distortion performance and robustness to adversarial attacks while surpassing existing learned image compressors.
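The white-box attack described above can be sketched as projected gradient ascent on a differentiable bitrate estimate. The following is a minimal illustration, not the paper's implementation: the paper attacks real learned codecs and their entropy models, whereas here `toy_encoder`, the unit-Gaussian rate proxy in `estimated_bits`, and all hyperparameters (`eps`, `steps`, `alpha`) are illustrative assumptions.

```python
import math
import torch

torch.manual_seed(0)

# Stand-in analysis transform (a real codec would use a trained network).
toy_encoder = torch.nn.Conv2d(3, 8, kernel_size=4, stride=2)

def estimated_bits(y):
    # Differentiable bitrate proxy: negative log2-likelihood of the latent
    # under a unit Gaussian, a crude stand-in for a learned entropy model.
    nll = 0.5 * y.pow(2) + 0.5 * math.log(2 * math.pi)
    return nll.sum() / math.log(2.0)

def bitrate_attack(x, eps=8 / 255, steps=20, alpha=1 / 255):
    """Projected gradient ascent on the estimated bitrate (L-inf ball)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        bits = estimated_bits(toy_encoder(x + delta))
        bits.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # ascend: make the image costlier to code
            delta.clamp_(-eps, eps)             # keep the perturbation imperceptible
            delta.grad.zero_()
    return (x + delta).clamp(0, 1).detach()

x = torch.rand(1, 3, 32, 32)   # toy input image in [0, 1]
x_adv = bitrate_attack(x)
```

The projection step (`clamp_`) bounds the perturbation so the manipulated image stays visually indistinguishable, while the ascent direction pushes the latent toward high-entropy regions that the codec spends more bits on.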

Original language: English (US)
Pages (from-to): 1-14
Number of pages: 14
Journal: IEEE Transactions on Artificial Intelligence
DOIs
State: Accepted/In press - 2023

Keywords

  • Adversarial machine learning
  • Bit rate
  • Closed box
  • Compressors
  • DoS attack
  • Entropy
  • Glass box
  • Image coding
  • Robustness
  • image compression

ASJC Scopus subject areas

  • Computer Science Applications
  • Artificial Intelligence
