Abstract
Feature attribution methods identify which features of an input most influence a model's output. Most widely-used feature attribution methods (such as SHAP, LIME, and Grad-CAM) are “class-dependent” methods in that they generate a feature attribution vector as a function of class. In this work, we demonstrate that class-dependent methods can “leak” information about the selected class, making that class appear more likely than it is. Thus, an end user runs the risk of drawing false conclusions when interpreting an explanation generated by a class-dependent method. In contrast, we introduce “distribution-aware” methods, which favor explanations that keep the label's distribution close to its distribution given all features of the input. We introduce SHAP-KL and FastSHAP-KL, two baseline distribution-aware methods that compute Shapley values. Finally, we perform a comprehensive evaluation of seven class-dependent and three distribution-aware methods on three clinical datasets of different high-dimensional data types: images, biosignals, and text.
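To make the distribution-aware idea concrete, here is a minimal sketch of a SHAP-KL-style computation: the value of a feature subset is the negative KL divergence between the label distribution given all features and the label distribution given only that subset, and Shapley values are taken over this value function. The toy linear-softmax classifier, the zero-vector baseline used to "remove" features, and names such as `predict_proba` and `value` are illustrative assumptions, not the paper's implementation.

```python
import itertools
import math
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy probabilistic classifier: linear logits followed by softmax.
# W, b, x, and the zero baseline are illustrative assumptions.
rng = np.random.default_rng(0)
n_features, n_classes = 4, 3
W = rng.normal(size=(n_classes, n_features))
b = rng.normal(size=n_classes)
baseline = np.zeros(n_features)     # stand-in for a "removed" feature value
x = rng.normal(size=n_features)     # input to explain

def predict_proba(x_masked):
    return softmax(W @ x_masked + b)

p_full = predict_proba(x)           # p(y | all features)

def kl(p, q, eps=1e-12):
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def value(subset):
    """Distribution-aware value of a feature subset: negative KL between
    the full-input label distribution and the distribution given only
    the features in `subset` (remaining features set to the baseline)."""
    x_s = baseline.copy()
    idx = list(subset)
    x_s[idx] = x[idx]
    return -kl(p_full, predict_proba(x_s))

def shapley_values():
    """Exact Shapley values of the KL-based value function.
    Exponential in n_features; fine for this 4-feature toy example."""
    phi = np.zeros(n_features)
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        for r in range(n_features):
            for S in itertools.combinations(others, r):
                w = (math.factorial(r) * math.factorial(n_features - r - 1)
                     / math.factorial(n_features))
                phi[i] += w * (value(S + (i,)) - value(S))
    return phi

print(shapley_values())
```

Because the value function depends on the whole predicted distribution rather than a single selected class, the resulting attributions cannot favor one class over the others by construction.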
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 8925-8953 |
| Number of pages | 29 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 206 |
| State | Published - 2023 |
| Event | 26th International Conference on Artificial Intelligence and Statistics, AISTATS 2023, Valencia, Spain (Apr 25 2023 - Apr 27 2023) |
ASJC Scopus subject areas
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability