Manipulation Attacks on Learned Image Compression

Kang Liu, Di Wu, Yangyu Wu, Yiru Wang, Dan Feng, Benjamin Tan, Siddharth Garg

Research output: Contribution to journal › Article › peer-review

Abstract

Deep learning (DL) techniques have shown promising results in image compression compared to conventional methods, achieving competitive bitrates and image reconstruction quality from compressed latents. However, while learned image compression has progressed toward higher peak signal-to-noise ratio (PSNR) and fewer bits per pixel (bpp), its robustness to adversarial images has received little deliberation. In this work, we investigate the robustness of image compression systems in which imperceptibly manipulated inputs can stealthily precipitate a significant increase in the compressed bitrate without compromising reconstruction quality. Such attacks can exhaust the storage or network bandwidth of computing systems and lead to service denial; we term this a denial-of-service attack on image compressors. To characterize the robustness of state-of-the-art learned image compression, we mount white-box and black-box attacks. Our white-box attack employs gradient ascent on the entropy estimate of the bitstream as a differentiable bitrate approximation. For the black-box attack, we propose discrete cosine transform-Net (DCT-Net), which simulates joint photographic experts group (JPEG) compression with architectural simplicity and lightweight training, as the substitute model, enabling fast adversarial transferability. Our results on six image compression architectures, each with six bitrate qualities (thirty-six models in total), show that they are surprisingly fragile: the white-box attack achieves up to a 55× bpp increase and the black-box attack up to 2×, revealing the devastating fragility of DL-based compression models. To improve robustness, we propose a novel compression architecture, factorAtn, which incorporates attention modules and a basic factorized entropy model; it presents a promising tradeoff between rate-distortion performance and robustness to adversarial attacks and surpasses existing learned image compressors.
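The white-box attack described above can be illustrated with a minimal toy sketch. Everything below is hypothetical and not the authors' implementation: a fixed random linear map stands in for a learned analysis transform, and a factorized Gaussian entropy model gives a closed-form rate estimate and gradient, so the bitrate-inflating gradient ascent under an L∞ imperceptibility bound can be shown in pure NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a learned analysis transform g_a: a fixed random
# linear map from pixels to latents. (Hypothetical; a real learned
# codec uses a deep CNN encoder.)
W = rng.normal(size=(64, 256)) / 16.0

def rate_and_grad(x, sigma=1.0):
    """Differentiable bitrate proxy: under a factorized Gaussian
    entropy model N(0, sigma^2), -log2 p(y) is quadratic in the
    latent y = Wx, so the rate estimate and its gradient w.r.t. the
    input have closed forms."""
    y = W @ x
    rate = np.sum(y**2) / (2 * sigma**2 * np.log(2))  # bits, up to a constant
    grad = (W.T @ y) / (sigma**2 * np.log(2))         # d(rate)/dx
    return rate, grad

def bpp_attack(x, eps=0.03, step=0.01, iters=50):
    """L_inf-bounded gradient ascent on the rate estimate: perturb
    the input to inflate the estimated bitrate while staying within
    eps of the original image (imperceptibility constraint)."""
    x_adv = x.copy()
    for _ in range(iters):
        _, g = rate_and_grad(x_adv)
        x_adv = x_adv + step * np.sign(g)         # signed gradient step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project to eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep valid pixel range
    return x_adv

x = rng.uniform(0.0, 1.0, size=256)  # flattened toy "image" in [0, 1]
r0, _ = rate_and_grad(x)
r1, _ = rate_and_grad(bpp_attack(x))
print(f"estimated rate before: {r0:.1f} bits, after: {r1:.1f} bits")
```

In this sketch the perturbed input yields a higher rate estimate while differing from the original by at most eps per pixel, mirroring the paper's observation that tiny input changes can disproportionately inflate bpp without degrading the image visibly.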

Original language: English (US)
Pages (from-to): 3083-3097
Number of pages: 15
Journal: IEEE Transactions on Artificial Intelligence
Volume: 5
Issue number: 6
State: Published - Jun 1, 2024

Keywords

  • Adversarial machine learning
  • DoS attack
  • image compression
  • robustness

ASJC Scopus subject areas

  • Computer Science Applications
  • Artificial Intelligence
