Visual Explanations of Image-Text Representations via Multi-Modal Information Bottleneck Attribution

Ying Wang, Tim G.J. Rudner, Andrew Gordon Wilson

Research output: Contribution to journal › Conference article › peer-review

Abstract

Vision-language pretrained models have seen remarkable success, but their application to safety-critical settings is limited by their lack of interpretability. To improve the interpretability of vision-language models such as CLIP, we propose a multi-modal information bottleneck (M2IB) approach that learns latent representations that compress irrelevant information while preserving relevant visual and textual features. We demonstrate how M2IB can be applied to attribution analysis of vision-language pretrained models, increasing attribution accuracy and improving the interpretability of such models when applied to safety-critical domains such as healthcare. Crucially, unlike commonly used unimodal attribution methods, M2IB does not require ground-truth labels, making it possible to audit representations of vision-language pretrained models when multiple modalities but no ground-truth data are available. Using CLIP as an example, we demonstrate the effectiveness of M2IB attribution and show that it outperforms gradient-based, perturbation-based, and attention-based attribution methods both qualitatively and quantitatively.
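For orientation, a brief sketch of the information bottleneck principle the abstract builds on: in the classical (single-modality) formulation of Tishby et al., a stochastic representation Z of an input X is learned to be maximally informative about a target Y while compressing X, where I(·;·) denotes mutual information and β > 0 trades off compression against relevance. The exact multi-modal objective used by M2IB is not given in this abstract; the expression below is the standard unimodal objective, shown only as background.

\max_{p(z \mid x)} \; I(Z; Y) - \beta \, I(Z; X)

Intuitively, M2IB adapts this trade-off to paired image and text representations, so that only features relevant across both modalities are retained, which is what enables attribution without ground-truth labels.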

Original language: English (US)
Journal: Advances in Neural Information Processing Systems
Volume: 36
State: Published - 2023
Event: 37th Conference on Neural Information Processing Systems, NeurIPS 2023 - New Orleans, United States
Duration: Dec 10 2023 - Dec 16 2023

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing

