TY - JOUR
T1 - Manipulation Attacks on Learned Image Compression
AU - Liu, Kang
AU - Wu, Di
AU - Wu, Yangyu
AU - Wang, Yiru
AU - Feng, Dan
AU - Tan, Benjamin
AU - Garg, Siddharth
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2024/6/1
Y1 - 2024/6/1
N2 - Deep learning (DL) techniques have shown promising results in image compression compared to conventional methods, achieving competitive bitrates and image reconstruction quality from the compressed latents. However, while learned image compression has progressed toward higher peak signal-to-noise ratio (PSNR) and fewer bits per pixel (bpp), its robustness to adversarial images has received little attention. In this work, we investigate the robustness of image compression systems, where imperceptibly manipulated inputs can stealthily cause a significant increase in the compressed bitrate without compromising reconstruction quality. Such attacks can exhaust the storage or network bandwidth of computing systems and lead to service denial; we term this a denial-of-service attack on image compressors. To characterize the robustness of state-of-the-art learned image compression, we mount white-box and black-box attacks. Our white-box attack applies gradient ascent to the entropy estimate of the bitstream, which serves as a bitrate approximation. For the black-box attack, we propose discrete cosine transform (DCT)-Net, an architecturally simple, lightweight-to-train substitute model that simulates joint photographic experts group (JPEG) compression, enabling fast adversarial transferability. Our results on six image compression architectures, each with six different bitrate qualities (thirty-six models in total), show that they are surprisingly fragile: the white-box attack achieves up to a 55× bpp increase and the black-box attack up to 2×, revealing the devastating fragility of DL-based compression models. To improve robustness, we propose factorAtn, a novel compression architecture that incorporates attention modules and a basic factorized entropy model; it presents a promising tradeoff between rate-distortion performance and robustness to adversarial attacks and surpasses existing learned image compressors.
AB - Deep learning (DL) techniques have shown promising results in image compression compared to conventional methods, achieving competitive bitrates and image reconstruction quality from the compressed latents. However, while learned image compression has progressed toward higher peak signal-to-noise ratio (PSNR) and fewer bits per pixel (bpp), its robustness to adversarial images has received little attention. In this work, we investigate the robustness of image compression systems, where imperceptibly manipulated inputs can stealthily cause a significant increase in the compressed bitrate without compromising reconstruction quality. Such attacks can exhaust the storage or network bandwidth of computing systems and lead to service denial; we term this a denial-of-service attack on image compressors. To characterize the robustness of state-of-the-art learned image compression, we mount white-box and black-box attacks. Our white-box attack applies gradient ascent to the entropy estimate of the bitstream, which serves as a bitrate approximation. For the black-box attack, we propose discrete cosine transform (DCT)-Net, an architecturally simple, lightweight-to-train substitute model that simulates joint photographic experts group (JPEG) compression, enabling fast adversarial transferability. Our results on six image compression architectures, each with six different bitrate qualities (thirty-six models in total), show that they are surprisingly fragile: the white-box attack achieves up to a 55× bpp increase and the black-box attack up to 2×, revealing the devastating fragility of DL-based compression models. To improve robustness, we propose factorAtn, a novel compression architecture that incorporates attention modules and a basic factorized entropy model; it presents a promising tradeoff between rate-distortion performance and robustness to adversarial attacks and surpasses existing learned image compressors.
KW - Adversarial machine learning
KW - DoS attack
KW - image compression
KW - robustness
UR - http://www.scopus.com/inward/record.url?scp=85179795640&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85179795640&partnerID=8YFLogxK
U2 - 10.1109/TAI.2023.3340982
DO - 10.1109/TAI.2023.3340982
M3 - Article
AN - SCOPUS:85179795640
SN - 2691-4581
VL - 5
SP - 3083
EP - 3097
JO - IEEE Transactions on Artificial Intelligence
JF - IEEE Transactions on Artificial Intelligence
IS - 6
ER -