Don't be fooled: label leakage in explanation methods and the importance of their quantitative evaluation

Neil Jethani, Adriel Saporta, Rajesh Ranganath

Research output: Contribution to journal › Conference article › peer-review

Abstract

Feature attribution methods identify which features of an input most influence a model's output. Most widely-used feature attribution methods (such as SHAP, LIME, and Grad-CAM) are “class-dependent” methods in that they generate a feature attribution vector as a function of class. In this work, we demonstrate that class-dependent methods can “leak” information about the selected class, making that class appear more likely than it is. Thus, an end user runs the risk of drawing false conclusions when interpreting an explanation generated by a class-dependent method. In contrast, we introduce “distribution-aware” methods, which favor explanations that keep the label's distribution close to its distribution given all features of the input. We introduce SHAP-KL and FastSHAP-KL, two baseline distribution-aware methods that compute Shapley values. Finally, we perform a comprehensive evaluation of seven class-dependent and three distribution-aware methods on three clinical datasets of different high-dimensional data types: images, biosignals, and text.
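The distribution-aware criterion described above can be illustrated with a small sketch: score a candidate feature subset S by how far the label distribution given only S drifts from the label distribution given the full input. This is a minimal, hypothetical illustration, not the paper's SHAP-KL/FastSHAP-KL implementation; the function names, the baseline-filling strategy for masked features, and the `predict_proba` interface are all assumptions made for clarity.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete label distributions."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def subset_score(predict_proba, x, subset_mask, baseline):
    """Lower is better: how closely p(y | x_S) tracks p(y | x).

    predict_proba: callable returning class probabilities for a batch of inputs.
    x:             full input as a 1-D feature vector.
    subset_mask:   boolean array, True for features kept in the subset S.
    baseline:      values substituted for masked-out features (a crude stand-in
                   for a proper conditional or surrogate model).
    """
    x_masked = np.where(subset_mask, x, baseline)
    p_full = predict_proba(x[None, :])[0]          # p(y | x)
    p_subset = predict_proba(x_masked[None, :])[0]  # approximation of p(y | x_S)
    return kl_divergence(p_full, p_subset)
```

In this sketch, a subset with a low score preserves the model's predictive distribution, whereas a class-dependent method could instead favor subsets that inflate the probability of the chosen class.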

Original language: English (US)
Pages (from-to): 8925-8953
Number of pages: 29
Journal: Proceedings of Machine Learning Research
Volume: 206
State: Published - 2023
Event: 26th International Conference on Artificial Intelligence and Statistics, AISTATS 2023 - Valencia, Spain
Duration: Apr 25 2023 - Apr 27 2023

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
