Abstract
Local explainability methods, those that seek to generate an explanation for each prediction, are increasingly prevalent. However, results from different local explainability methods are difficult to compare since they may be parameter-dependent, unstable due to sampling variability, or expressed in different scales and dimensions. We propose GALE, a topology-based framework to extract a simplified representation from a set of local explanations. GALE models the relationship between the explanation space and model predictions to generate a topological skeleton, which we use to compare local explanation outputs. We demonstrate that GALE can not only reliably identify differences between explainability techniques but also provide stable representations. Then, we show how our framework can be used to identify appropriate parameters for local explainability methods. Our framework is simple, does not require complex optimizations, and can be broadly applied to most local explanation methods.
Original language | English (US) |
---|---|
Pages (from-to) | 322-331 |
Number of pages | 10 |
Journal | Proceedings of Machine Learning Research |
Volume | 196 |
State | Published - 2022 |
Event | ICML Workshop on Topology, Algebra, and Geometry in Machine Learning, TAG:ML 2022 - Virtual, Online, United States. Duration: Jul 20 2022 → … |
ASJC Scopus subject areas
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability